The aggregation of items from social media streams, such as Flickr photos and Twitter tweets, into meaningful groups can help users contextualise and effectively consume the torrents of information on the social web. This task is challenging due to the scale of the streams and the inherently multimodal nature of the information being contextualised.
In this talk we’ll describe some of our recent work on trend and event detection in multimedia data streams. We focus on scalable streaming algorithms that can be applied to multimedia data streams from the web and the social web. The talk will cover two particular aspects of our work: mining Twitter for trending images by detecting near duplicates; and detecting social events in multimedia data with streaming clustering algorithms. We will describe our techniques in detail and explore open questions and areas of potential future work in both of these tasks.
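As a concrete illustration of near-duplicate detection, the sketch below uses a perceptual "difference hash" (dHash): images whose 64-bit hashes differ in only a few bits are treated as near-duplicates. This is a common technique chosen here for illustration, not necessarily the method used in the work described above.

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

/**
 * Minimal near-duplicate detection sketch using a "difference hash":
 * shrink the image to a 9x8 greyscale thumbnail, compare horizontally
 * adjacent pixel brightnesses to get 64 bits, and treat small Hamming
 * distances between hashes as near-duplicates.
 */
public class NearDuplicate {

    /** Compute a 64-bit dHash from the image. */
    public static long dhash(BufferedImage img) {
        BufferedImage small = new BufferedImage(9, 8, BufferedImage.TYPE_BYTE_GRAY);
        Graphics2D g = small.createGraphics();
        g.drawImage(img, 0, 0, 9, 8, null); // resize + greyscale in one step
        g.dispose();

        long hash = 0L;
        for (int y = 0; y < 8; y++) {
            for (int x = 0; x < 8; x++) {
                int left  = small.getRaster().getSample(x, y, 0);
                int right = small.getRaster().getSample(x + 1, y, 0);
                hash = (hash << 1) | (left < right ? 1L : 0L);
            }
        }
        return hash;
    }

    /** Bits differing between two hashes; a threshold around 10 is typical. */
    public static int distance(long a, long b) {
        return Long.bitCount(a ^ b);
    }
}
```

In a streaming setting the hashes would additionally be bucketed (e.g. by hash prefix) so that each incoming image is compared against a handful of candidates rather than the whole history.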
Photo collections are also getting help from the science of searching. If you’ve ever done a Google image search you’ll know they’re not always brilliant – that’s because the search engine’s not searching the images themselves, it’s looking at the words around them. But a team at the University of Southampton is giving computers a better eye for what’s actually in an image, so not only can you find what you’re after more easily, the computer can learn how to sort new photos itself.
Southampton has a long history of research in the areas of multimedia information analysis. This talk will focus on some of the work we have been involved with in the areas of multimedia analysis and search. The talk will start by looking at the broad range of multimedia analysis from low-level features to semantic understanding. This will be accompanied by demos of different multimedia analysis and search software developed over the years at Southampton.
We'll then explore the underpinnings of visual information analysis and see some computer vision techniques in action. In particular, we'll look at how visual content can be represented in ways analogous to textual information, and how techniques developed for analysing and indexing text can be adapted to images.
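To give a flavour of that text analogy, the sketch below quantises local feature vectors into discrete "visual words" and builds a word-count histogram, so an image can be treated like a text document. The codebook is assumed to have been learned beforehand (typically by k-means over training descriptors); this is an illustrative sketch rather than the exact pipeline used in the demos.

```java
/**
 * Illustrative sketch: turn local feature descriptors into "visual words"
 * by assigning each descriptor to its nearest codebook centroid, so an
 * image becomes a bag of discrete terms just like a text document.
 */
public class VisualWords {

    /** Index of the nearest centroid: the descriptor's visual word. */
    public static int quantise(float[] descriptor, float[][] codebook) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < codebook.length; i++) {
            double d = 0;
            for (int j = 0; j < descriptor.length; j++) {
                double diff = descriptor[j] - codebook[i][j];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }

    /** Histogram of visual-word counts: the image's bag of visual words. */
    public static int[] bagOfVisualWords(float[][] descriptors, float[][] codebook) {
        int[] counts = new int[codebook.length];
        for (float[] d : descriptors) counts[quantise(d, codebook)]++;
        return counts;
    }
}
```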
Finally, we'll look at how the next generation of multimedia analysis software is being developed, and introduce two open-source software projects being developed at Southampton that are paving the way for future research.
Southampton has a long history of research in the areas of multimedia information analysis. This talk will focus on some of the recent work we have been involved with in the area of image search. The talk will start by looking at how image content can be represented in ways analogous to textual information and how techniques developed for indexing text can be adapted to images. In particular, the talk will introduce ImageTerrier, a research platform for image retrieval that is built around the University of Glasgow's Terrier text retrieval software. The talk will also cover some of our recent work on image classification and image search result diversification.
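To make the text-retrieval analogy concrete, here is a toy inverted index over visual words. This is purely illustrative and is not the ImageTerrier API; it just shows why posting lists make visual-word queries cheap compared with scanning every image.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Toy inverted index over visual terms: each posting list maps a
 * visual-word id to the images containing it, so a query touches only
 * the short lists for its own terms.
 */
public class VisualInvertedIndex {
    private final Map<Integer, List<Integer>> postings = new HashMap<>();

    /** Add an image's visual words to the index. */
    public void index(int imageId, int[] visualWords) {
        for (int w : visualWords) {
            postings.computeIfAbsent(w, k -> new ArrayList<>()).add(imageId);
        }
    }

    /** Score images by how many query terms they share (simple overlap count). */
    public Map<Integer, Integer> query(int[] queryWords) {
        Map<Integer, Integer> scores = new HashMap<>();
        for (int w : queryWords) {
            for (int img : postings.getOrDefault(w, List.of())) {
                scores.merge(img, 1, Integer::sum);
            }
        }
        return scores;
    }
}
```

A real system would weight terms (e.g. with tf-idf) and compress the posting lists, but the core data structure is the same.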
This seminar takes the form of a research discussion which will focus on the Internet of Things (IoT) research being undertaken in WAIS and other research groups in ECS. IoT is a significant emerging research area, with funding for research available from many channels including new H2020 programmes and the TSB. We have seen examples of IoT devices being built in WAIS and other ECS groups, e.g. in sensor networking, energy monitoring via Zigbee devices, and of course Erica the Rhino (a Big Thing!).
The goal of the session is to briefly present such examples of existing Things in our lab with the intent of seeding discussion on open research questions, and therefore future work we could do towards new Things being deployed for experimentation in Building 32 or its environs. The session will discuss what 'things' we have, how they work, what new 'things' we might want to create and deploy, what components we might need to enable this, and how we might interact with these objects.
The web is inherently multimedia in nature, and contains data and information in many different audio, visual and textual forms. To fully understand the nature of the web and the information contained within it, it is necessary to harness all modalities of data. Within the EU funded ARCOMEM project, we are building a platform for crawling and analysing samples of web and social-web data at scale. Whilst the project is ostensibly about issues related to intelligent web-archiving, the ARCOMEM software has features that make it ideal for use as a platform for a scalable Multimedia Web Observatory.
This talk will describe the ARCOMEM approach from data harvesting through to detailed content analysis and demonstrate how this approach relates to a multimedia web observatory. In addition to describing the overall framework, I'll show some of the research aspects of the system related specifically to multimodal multimedia data in small (~100 GB) to medium-scale (multi-terabyte) web archives, and demonstrate how these are targeted at our Parliamentarian and Journalist end-users.
Building a "street view" camera system and "Google Goggles" visual style building recognition system all tied up with linked data.
http://eprints.soton.ac.uk/273040/
OpenIMAJ and ImageTerrier are recently released open-source libraries and tools for experimentation and development of multimedia applications using Java-compatible programming languages. OpenIMAJ (the Open toolkit for Intelligent Multimedia Analysis in Java) is a collection of libraries for multimedia analysis. The image libraries contain methods for processing images and extracting state-of-the-art features, including SIFT. The video and audio libraries support both cross-platform capture and processing. The clustering and nearest-neighbour libraries contain efficient, multi-threaded implementations of clustering algorithms. The clustering library makes it possible to easily create BoVW representations for images and videos. OpenIMAJ also incorporates a number of tools to enable extremely-large-scale multimedia analysis using distributed computing with Apache Hadoop. ImageTerrier is a scalable, high-performance search engine platform for content-based image retrieval applications using features extracted with the OpenIMAJ library and tools. The ImageTerrier platform provides a comprehensive test-bed for experimenting with image retrieval techniques. The platform incorporates a state-of-the-art implementation of the single-pass indexing technique for constructing inverted indexes and is capable of producing highly compressed index data structures.
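As background to the "highly compressed index data structures" mentioned above, the sketch below shows one standard compression idea for inverted indexes: store the gaps between sorted document ids and Elias-gamma code each gap. Whether ImageTerrier uses this exact scheme is not stated here, so treat it as an illustrative assumption.

```java
/**
 * Sketch of gap + Elias-gamma compression for a posting list. Sorted
 * document ids have small gaps, and gamma codes spend few bits on small
 * numbers, so the list compresses well. Bits are built as a String for
 * clarity; a real index would pack them into bytes.
 */
public class GammaPostings {

    /** Elias-gamma code of n (n >= 1): unary length prefix, then binary. */
    static String gamma(int n) {
        String bin = Integer.toBinaryString(n);
        return "0".repeat(bin.length() - 1) + bin;
    }

    /** Encode a sorted posting list as gamma-coded gaps. */
    static String encode(int[] sortedDocIds) {
        StringBuilder out = new StringBuilder();
        int prev = 0;
        for (int id : sortedDocIds) {
            out.append(gamma(id - prev)); // small gaps -> short codes
            prev = id;
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // docs {3, 7, 8, 21} -> gaps {3, 4, 1, 13}
        System.out.println(encode(new int[]{3, 7, 8, 21}));
    }
}
```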
http://eprints.soton.ac.uk/268496/
This paper proposes a new technique for auto-annotation and semantic retrieval based upon the idea of linearly mapping an image feature space to a keyword space. The new technique is compared to several related techniques, and a number of salient points about each of the techniques are discussed and contrasted. The paper also discusses how these techniques might actually scale to a real-world retrieval problem, and demonstrates this through a case study of a semantic retrieval technique being used on a real-world data-set (with a mix of annotated and unannotated images) from a picture library.
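For readers unfamiliar with the setup, such a linear mapping can be written as follows, where the rows of X are the training images' feature vectors, the rows of Y are their keyword indicator vectors, and the ridge regulariser λ is a standard stabilisation assumed here rather than a detail taken from the paper:

```latex
\[
W^{*} \;=\; \arg\min_{W}\; \lVert XW - Y \rVert_F^{2} \;+\; \lambda \lVert W \rVert_F^{2}
\;=\; \left( X^{\top}X + \lambda I \right)^{-1} X^{\top} Y
\]
```

A new image with feature vector x is then annotated with (or retrieved for) the keywords corresponding to the largest entries of x^T W*.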
http://eprints.soton.ac.uk/352465/
The data contained within the web is inherently multimedia; consisting of a rich mix of textual, visual and audio modalities. Prospective Web Observatories need to take this into account from the ground up. This paper explores some uses for the automatic analysis of multimedia data within a Web Observatory, and describes a potential platform for an extensible and scalable multimedia Web Observatory.
http://eprints.soton.ac.uk/268168/
The diffusion of new Internet and web technologies has increased the distribution of different digital content, such as text, sounds, images and videos. In this paper we focus on images and their role in the analysis of diversity. We consider diversity as a concept that takes into account the wide variety of information sources, and their differences in perspective and viewpoint. We describe a number of different dimensions of diversity; in particular, we analyze the dimensions related to image searches and context analysis, emotions conveyed by images and opinion mining, and bias analysis.
http://eprints.soton.ac.uk/260954/
In this paper, we propose a model of automatic image annotation based on propagation of keywords. The model works on the premise that visually similar image content is likely to have similar semantic content. Image content is extracted using local descriptors at salient points within the image, and the feature-vectors are quantised into visual terms. The visual terms for each image are modelled using techniques taken from the information retrieval community. The modelled information from an unlabelled query image is compared to the models of a corpus of labelled images and labels are propagated from the most similar labelled images to the query image.
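The propagation step can be sketched as follows: represent each image as a visual-term histogram, find the labelled images most similar to the query, and transfer their keywords weighted by similarity. Cosine similarity and a fixed neighbourhood size k are illustrative assumptions; the paper's exact retrieval model may differ.

```java
import java.util.*;

/**
 * Sketch of keyword propagation: score each labelled corpus image by its
 * similarity to the query histogram, then let the k most similar images
 * vote for their keywords, weighted by similarity.
 */
public class LabelPropagation {

    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return (na == 0 || nb == 0) ? 0 : dot / Math.sqrt(na * nb);
    }

    /** Similarity-weighted keyword votes from the k nearest labelled images. */
    static Map<String, Double> propagate(double[] query,
                                         List<double[]> corpus,
                                         List<Set<String>> labels,
                                         int k) {
        Integer[] order = new Integer[corpus.size()];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, (i, j) ->
                Double.compare(cosine(query, corpus.get(j)), cosine(query, corpus.get(i))));

        Map<String, Double> votes = new HashMap<>();
        for (int r = 0; r < Math.min(k, order.length); r++) {
            double sim = cosine(query, corpus.get(order[r]));
            for (String kw : labels.get(order[r])) votes.merge(kw, sim, Double::sum);
        }
        return votes; // annotate with the highest-scoring keywords
    }
}
```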
http://eprints.soton.ac.uk/262870/
This paper presents a novel technique for learning the underlying structure that links visual observations with semantics. The technique, inspired by a text-retrieval technique known as cross-language latent semantic indexing, uses linear algebra to learn the semantic structure linking image features and keywords from a training set of annotated images. This structure can then be applied to unannotated images, thus providing the ability to search the unannotated images by keyword. This factorisation approach is shown to perform well, even when using only simple global image features.
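The cross-language LSI analogy suggests the following construction, sketched here under the assumption (not stated in the abstract) that the feature and keyword term-document matrices of the training images are stacked and factorised with a truncated SVD:

```latex
\[
M \;=\; \begin{bmatrix} F \\ K \end{bmatrix} \;\approx\; U_r \Sigma_r V_r^{\top},
\qquad F \in \mathbb{R}^{f \times n},\quad K \in \mathbb{R}^{k \times n}
\]
```

Columns of F hold the image feature vectors and columns of K the keyword occurrence vectors of the same n training images. An unannotated image is folded into the r-dimensional semantic space by forming a vector q with its (unknown) keyword rows set to zero and projecting it as \(\hat{q} = \Sigma_r^{-1} U_r^{\top} q\); keyword queries are projected the same way and compared by cosine similarity.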
http://eprints.soton.ac.uk/262737/
Semantic representation of multimedia information is vital for enabling the kind of multimedia search capabilities that professional searchers require. Manual annotation is often not possible because of the sheer scale of the multimedia information that needs indexing. This paper explores the ways in which we are using both top-down, ontologically driven approaches and bottom-up, automatic-annotation approaches to provide retrieval facilities to users. We also discuss many of the current techniques that we are investigating to combine these top-down and bottom-up approaches.
http://eprints.soton.ac.uk/261887/
This paper attempts to review and characterise the problem of the semantic gap in image retrieval and the attempts being made to bridge it. In particular, we draw from our own experience in user queries, automatic annotation and ontological techniques. The first section of the paper describes a characterisation of the semantic gap as a hierarchy between the raw media and full semantic understanding of the media's content. The second section discusses real users' queries with respect to the semantic gap. The final sections of the paper describe our own experience in attempting to bridge the semantic gap. In particular we discuss our work on auto-annotation and semantic-space models of image retrieval in order to bridge the gap from the bottom up, and the use of ontologies, which capture more semantics than keyword object labels alone, as a technique for bridging the gap from the top down.
http://eprints.soton.ac.uk/260419/
This paper presents an investigation into the use of a mobile device as a novel interface to a content-based image retrieval system. The initial development has been based on the concept of using the mobile device in an art gallery for mining data about the exhibits, although a number of other applications are envisaged. The paper presents a novel methodology for performing content-based image retrieval and object recognition from query images that have been degraded by noise and subjected to transformations through the imaging system. The methodology uses techniques inspired by the information retrieval community in order to aid efficient indexing and retrieval. In particular, a vector-space model is used in the efficient indexing of each image, and a two-stage pruning/ranking procedure is used to determine the correct matching image. The retrieval algorithm is shown to outperform a number of existing algorithms when used with query images from the mobile device.
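A two-stage prune/rank scheme of this kind can be sketched as below. The pruning threshold and the simple overlap-based ranking are illustrative assumptions standing in for the paper's exact vector-space weighting.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/**
 * Sketch of two-stage matching: a cheap pruning pass discards database
 * images sharing too few visual terms with the query, then a ranking
 * pass scores only the survivors.
 */
public class TwoStageMatcher {

    /** Stage 1: keep only images sharing at least minShared visual terms. */
    static List<Integer> prune(Set<Integer> queryTerms,
                               List<Set<Integer>> dbTerms, int minShared) {
        List<Integer> survivors = new ArrayList<>();
        for (int i = 0; i < dbTerms.size(); i++) {
            Set<Integer> shared = new HashSet<>(queryTerms);
            shared.retainAll(dbTerms.get(i));
            if (shared.size() >= minShared) survivors.add(i);
        }
        return survivors;
    }

    /** Stage 2: rank survivors by the fraction of their terms matched
     *  (a stand-in for a tf-idf cosine score). Returns the best index, or -1. */
    static int bestMatch(Set<Integer> queryTerms,
                         List<Set<Integer>> dbTerms, List<Integer> survivors) {
        int best = -1;
        double bestScore = -1;
        for (int i : survivors) {
            Set<Integer> shared = new HashSet<>(queryTerms);
            shared.retainAll(dbTerms.get(i));
            double score = shared.size() / (double) dbTerms.get(i).size();
            if (score > bestScore) { bestScore = score; best = i; }
        }
        return best;
    }
}
```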
http://eprints.soton.ac.uk/258295/
In this paper, we introduce a novel technique for image matching and feature-based tracking. The technique is based on the idea of using the Scale-Saliency algorithm to pick a sparse set of ‘interesting’ or ‘salient’ features. Feature vectors for each of the salient regions are generated and used in the matching process. Due to the nature of the sparse representation of feature vectors generated by the technique, sub-image matching is also accomplished. We demonstrate the technique's robustness to geometric transformations in the query image and suggest that the technique would be suitable for view-based object recognition. We also apply the matching technique to the problem of feature tracking across multiple video frames by matching salient regions across frame pairs. We show that our tracking algorithm is able to explicitly extract the 3D motion vector of each salient region during the tracking process, using a single uncalibrated camera. We illustrate the functionality of our tracking algorithm by showing results from tracking a single salient region in near real-time with a live camera input.
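As a loose illustration of the frame-pair matching step, the sketch below pairs salient-region descriptors between consecutive frames by nearest neighbour with a ratio test. The acceptance criterion is an assumption (the abstract does not specify one), and the 3D motion-vector extraction is omitted.

```java
/**
 * Sketch of frame-to-frame salient-region matching: each region in frame t
 * is matched to its nearest descriptor in frame t+1, and the match is kept
 * only if it is clearly better than the second-best candidate.
 */
public class RegionTracker {

    static double dist(float[] a, float[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) { double x = a[i] - b[i]; d += x * x; }
        return Math.sqrt(d);
    }

    /** match[i] = index of region i's match in the next frame, or -1 if rejected. */
    static int[] match(float[][] current, float[][] next, double ratio) {
        int[] matches = new int[current.length];
        for (int i = 0; i < current.length; i++) {
            int best = -1;
            double d1 = Double.MAX_VALUE, d2 = Double.MAX_VALUE;
            for (int j = 0; j < next.length; j++) {
                double d = dist(current[i], next[j]);
                if (d < d1)      { d2 = d1; d1 = d; best = j; }
                else if (d < d2) { d2 = d; }
            }
            matches[i] = (d1 < ratio * d2) ? best : -1; // e.g. ratio = 0.8
        }
        return matches;
    }
}
```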