Unfortunately I can’t get to Museums and the Web 2015 in person. So instead I’ve been taking a good look at the papers from the conference. Here are my favourites:
Mapping the way to a more digitally inclusive museum
View paper here by Alyssa McLeod.
Interesting discussion about using digital mapping tools in exhibitions (specifically on touch screens/kiosks within the actual shows). The paper includes a link to the Victoria and Albert’s Digital Explorer which I haven’t seen before (a description of it is at the bottom of this post).
Upon visiting Alyssa McLeod’s website, I found the ROM Collections interface demo that she designed. The Objects view is particularly interesting; view it here. I thought the colours might be related to the most dominant colour in each object, but in fact they seem to indicate which part of the collection the objects are from. It will be interesting to see where this demo progresses.
View paper here by Shyam Oberoi, Robert Stein, Kristen Arnold
[The Dallas Museum of Art’s] microsite suffered from the typical museum website complaints: inefficient search, awkward presentation of images and metadata, unintuitive user interface.
Their new backend uses Django with MongoDB. There is an interesting discussion about using a ‘crosswalk’ to translate industry-specific terminology into text a user can understand. The frontend site is built in Drupal, with data from the ‘Brain’ coming via API calls that are cached and only updated when the object metadata changes.
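The crosswalk idea can be sketched as a simple mapping from internal field names to visitor-friendly labels. This is a minimal illustration only: the field names, labels, and sample record below are invented, not the DMA’s actual schema.

```python
# Hypothetical "crosswalk" mapping internal cataloguing field names
# to labels a web visitor can understand. All names here are invented
# for illustration.
CROSSWALK = {
    "crdline": "Credit line",
    "objrights": "Image rights",
    "medium": "Materials",
    "dimen": "Dimensions",
}

def translate(record):
    """Rewrite a raw metadata record using visitor-friendly keys.

    Fields without a crosswalk entry pass through unchanged.
    """
    return {CROSSWALK.get(key, key): value for key, value in record.items()}

raw = {"crdline": "Gift of an anonymous donor", "medium": "Oil on canvas"}
print(translate(raw))
# {'Credit line': 'Gift of an anonymous donor', 'Materials': 'Oil on canvas'}
```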
The new landing page has three entry points for the Web visitor: search, browse by facet, or a random assortment of highlights from the DMA’s collection. From a technical perspective, just about everything in the online collection is the result of a search: the browse categories return search results for the most common artists or materials; the number of objects per department or location; the colors via the palette extracted by Brain; or the results between two numeric dates. Since the search box is the most common entry point, we enabled type-ahead auto-complete on certain indexed fields (artist, title) to assist with spelling errors, but we will also allow users to search on any piece of object metadata in the collection.
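The type-ahead auto-complete they describe on the artist and title fields could work something like the sketch below, using a sorted term list and prefix matching. The data and class name are invented; in practice a search engine’s own suggester component would do this job.

```python
# Minimal prefix-based type-ahead sketch; terms and names are
# invented for illustration, not the DMA's implementation.
import bisect

class TypeAhead:
    def __init__(self, terms):
        # Keep a sorted, lower-cased term list so prefix matches
        # are contiguous and findable by binary search.
        self.terms = sorted(t.lower() for t in terms)

    def suggest(self, prefix, limit=5):
        prefix = prefix.lower()
        start = bisect.bisect_left(self.terms, prefix)
        out = []
        for term in self.terms[start:start + limit]:
            if not term.startswith(prefix):
                break
            out.append(term)
        return out

index = TypeAhead(["Monet", "Mondrian", "Manet", "O'Keeffe"])
print(index.suggest("mon"))  # ['mondrian', 'monet']
```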
It’s perhaps also worth noting that we deliberately avoid anything that resembled an “advanced” search, where a user would be forced to know which fields contained which information.
About faceting in search results
This faceting of results by predefined categories is familiar to users of Amazon and similar e-commerce sites, and provides users with a simple way to further refine their results without additional typing.
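Faceting of this kind boils down to counting results under each value of a predefined category. A rough sketch, with invented object data:

```python
# Faceting sketch: count search results by a category field.
# The records and field names are invented for illustration.
from collections import Counter

results = [
    {"title": "Vase", "department": "Decorative Arts", "material": "glass"},
    {"title": "Bowl", "department": "Decorative Arts", "material": "ceramic"},
    {"title": "Portrait", "department": "European Art", "material": "oil"},
]

def facet(results, field):
    """Count how many results fall under each value of a field."""
    return Counter(r[field] for r in results)

print(facet(results, "department"))
# Counter({'Decorative Arts': 2, 'European Art': 1})
```

Each facet value and its count then becomes a clickable filter that narrows the result set without any further typing.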
The collection site is available here: https://www.dma.org/collection.
Assessing the user experience (UX) of online museum collections: Perspectives from design and museum professionals
View paper here by Craig MacDonald
A typical feature of museum websites is the online collection, which was initially conceived as a way to provide subject-matter experts with convenient access to museum holdings without needing to be physically present (Rayward & Twidale, 1999; Jones, 2007). However, studies have shown that online museum collections are among the least popular features of a museum website (Haynes & Zambonini, 2007; Fantoni & Stein, 2012).
a large number of people want to find and view museum objects digitally but have been deterred from doing so due to the poor user experience (UX) of existing online-collection interfaces.
This paper discusses the creation of a UX rubric to use when assessing an online museum collection. There are some interesting points about evaluation and general UX background information.
Other interesting looking papers:
- What the Fonds?! The ups and downs of digitising Tate’s Archive
- When to ask and when to shut up: How to get visitor feedback on digital interactives
- Beyond browsing and searching: Design and development of a platform for supporting curatorial research and content creation
- Reconsidering searching and browsing on the Cooper Hewitt’s Collections website
- Art + Data: Building the SFMOMA Collection API
- Art Tracks: Visualizing the stories and lifespan of an artwork
- Corning Museum of Glass
- DMA: https://www.dma.org/collection
- V&A Digital Explorer: http://www.vam.ac.uk/digital/map/
The V&A’s Digital Explorer
A brief description:
The Digital Explorer opens with a bird’s-eye-view map of the V&A. There are no instructions as to what to do. Across the top of the screen is a search field and the numbers 0 to 6.

When you click on a room it becomes highlighted, with other rooms also taking on that colour but at slight opacity. At the top of the screen, below the numbers, the title of the gallery you have clicked on appears. On most galleries a block of thumbnails also appears at the bottom of the screen. These are quite small but are not cropped in any way. If you click on a thumbnail it loads a slightly larger image of the work, with its title and location. If you click the work again it loads a full-screen modal-type window displaying all the work information. There is an image of the work in the left corner – although no larger than in the second state – and a blue ‘x’ in the top right corner.

On some works, instead of just the one image in the corner, the window contains a number of images – I assume all the images they have of that object. A finger pointer is displayed while hovering over the text and images, but clicking on them doesn’t do anything. The text on some screens extends below the bottom of the modal-style window, but you can’t scroll down. The window leaves about 100px of space between its bottom edge and the top of the title of the work thumbnail you clicked on – this ensures that you know you can easily return to the previous screen.

Searching for a name lets you select from a list of objects; when you select an object, it shows you where that object is within the gallery.