Press/Studies Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations (Feb 2016)

The Ghost of MTurk Past
Contributor
"In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 100K images where each image has an average of 21 objects, 18 attributes, and 18 pairwise relationships between objects... Visual Genome was collected and verified entirely by crowd workers from Amazon Mechanical Turk."

http://visualgenome.org/static/paper/Visual_Genome.pdf

Also of interest: https://twitter.com/visualgenome
 

clickhappier

┬──┬ ノ( ゜-゜ノ)
Subforum Curator
Crowd Pleaser
For reference: this paper describes the project run by the MTurk requester Visual Genome. The paper's text doesn't currently contain a date, just placeholders to be filled in once a journal accepts and publishes it, but it was apparently posted only a few days ago; the file's properties date it Feb 23, 2016. Their page http://visualgenome.org/paper also lists some earlier papers that referred to their work.