AI Art Reimagines Seattle
Pedestrianize Pike Place Market. It's the not-so-hot take fired off everywhere from Urbanist Twitter threads to Seattle Times editorial board meetings.
But what would one of our city's sacred landmarks actually look like without cars (and the drivers cursing their GPS directions)?
A few weeks ago, one viral rendering surfaced not from the hand of a human, but from the burgeoning imagination of artificial intelligence. The image posted by an open streets advocate depicted a decidedly shadier and more inviting road next to the market.
Pike Place Market (Seattle, Washington) pic.twitter.com/gD7a0bO7wm
— AI-generated street transformations (@betterstreetsai) July 30, 2022
This, it should be stressed, is probably one of the more practical deployments of an AI art generator. Often Dall-E Mini images look entertainingly nightmarish (type "Pete Carroll fishmonger" if you don't believe me). Other times, they're boring or miss the mark. The "Seattle freeze" is lost on AI.
But people can't get enough of these meme machines and their capacity, in seconds, to reimagine familiar sights. The "Seattle is dying" crowd might appreciate this skyline hellscape from a Midjourney user. A mash-up of a Space Needle picture and galactic watercolor painting produced this beauty. Dall-E recently fashioned a Banksy-like take on the Seahawks logo.
While some of these generators' creations are flat-out alarming, many are alarmingly accurate. "Dall-E was very surprising to a lot of us," says Tanmay Gupta, a research scientist at the Allen Institute for AI.
Founded in 2014 by the late Paul Allen, the nonprofit on Lake Union has studied and driven advancements in AI for nearly a decade now. Gupta is on the PRIOR, or Perceptual Reasoning and Interaction Research, team, which has examined the relationship between text and visuals for years. Gupta recalls working with Flintstones cartoons a few years ago to create AI videos. "You would say something like, 'Fred is sitting on the couch, next to Wilma, who is reading the newspaper.' And then this model would go and create a scene where these things were actually happening."
But it wasn't until Dall-E Mini's arrival about a year ago that AI visuals hit the mainstream and put the field on notice. "Not only was the image generation already surprising, but also the fact that it was good at composing things that are very unique—for example, like a person in a spacesuit, sitting on a horse, on the moon." Some colleagues tested the new generator with meta prompts, like "A computer that can see and understand everything" and "A humanoid robot agent laying helplessly on the ground of a home." Dall-E handled them with ease.
Which raises all sorts of concerns, Gupta acknowledges. As AI videos improve, how will we distinguish a politician's real speech from a deepfake? How do we compensate the artists from whom AI learns how to conjure, say, a Banksy-like piece? And how does the technology avoid mimicking the worst of humanity? "While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases," Dall-E's site says in a disclaimer. "While the extent and nature of the biases of the Dall-E Mini model have yet to be fully documented, given the fact that the model was trained on unfiltered data from the internet, it may generate images that contain stereotypes against minority groups."
In the short term, Gupta says, researchers are now exploring more ways for users, often artists, to control and edit these visuals. In the meantime, AI will continue to run wild.
Bits and bytes. The Dan Price bombshell. Bill Gates's TerraPower raises $750 million. A pet dating app, Offleash'd, launches in Seattle.