SciTech Roundup 10/3
CMU RESEARCH ROUNDUP
Carnegie Mellon will begin construction on The Richard King Mellon Hall of Sciences, a new building funded by a $75 million grant. It will house classrooms, labs, and other work areas. These spaces are primarily for the Biology and Chemistry departments and the Neuroscience Institute from the Mellon College of Science, as well as the Computational Biology and Machine Learning departments and the Language Technologies Institute from the School of Computer Science. The Miller Institute for Contemporary Art will also relocate to this building from its current location in the Purnell Center for the Arts, doubling its gallery space and accommodating five times as many visitors.
Many of us are used to seeing UPS and Amazon Prime trucks every week in the neighborhood. However, these vehicles can consume large amounts of energy as they travel from the warehouse to your house to fulfill that "one-day delivery" promise. This type of delivery is called last-mile delivery, and as consumer demands increase, so do the energy usage and environmental impact of last-mile delivery.
But what about delivery drones? Researchers in Carnegie Mellon's civil and environmental engineering department have investigated the possibility of using drones instead of trucks for last-mile deliveries to reduce energy consumption. The team periodically interacted with organizations like Amazon, Pittsburgh Region Clean Cities, and the City of Pittsburgh to ensure that the experiment mirrored real-world conditions.
In their paper in the August issue of "Patterns," the researchers reported that drones use 94 percent less energy per package than other vehicles. How much energy the drone used depended on payload mass, or the weight of the package, and delivery distance, so the most cost-effective deliveries were for materials that were light but valuable, such as medical supplies. Surprisingly, they found that wind and drone speed seemed to have little effect on energy consumption.
In practice, drones work best with smaller packages, so larger packages would still need to be delivered via other methods. E-cargo bikes currently provide a good alternative that similarly doesn't drain too much energy, and delivery robots are being developed as well, though their energy efficiency still depends greatly on the manufacturer.
The latest in AI-generated art
While garnering differing opinions all around, AI-generated art is definitely on the hype train, making more advances and becoming more readily available to the public. Much of the recent hype comes from Craiyon, or DALL-E mini, a publicly available copycat of OpenAI's DALL-E that similarly generates art from text prompts. (It was renamed Craiyon after complaints that DALL-E mini wasn't affiliated with DALL-E or OpenAI.) Craiyon is considerably underpowered compared to OpenAI's DALL-E, generating warped images and motifs, but that did not stop the Internet from using it to generate up to 50,000 images each day of everything from politicians to aliens.
Now, there are even more publicly available AI art generators: on Sept. 28, OpenAI made its latest version of DALL-E, DALL-E 2, available to the public. This at least exposes users to a more moderated, more secure AI art generator that even comes with a Risks and Limitations document explaining its potential issues, such as reinforcement of stereotypes. The newly public DALL-E 2 also comes with Outpainting, a feature added in August that allows users to expand an image or piece of art beyond its borders in a similar art style.
Almost as popular as DALL-E 2 is Stable Diffusion, launched on Sept. 10 and made public on Sept. 22, which similarly generates art from text-based prompts. Another platform is Midjourney, which launched its open beta on July 12 and is operated through Discord: you send a text prompt to a bot and receive a set of images based on it.
The popularity of DALL-E 2 and its spinoffs seems to have inspired bigger tech companies to dip their toes into text-based generative art: Meta released Make-A-Video on Sept. 29, a tool that generates videos based on text prompts. Meta previously released a paper on Make-A-Scene, which generates art based on a text input and a sketch input. Google also announced Imagen in May, a text-based art generator that it claims produces images rated as higher quality by humans than DALL-E 2's images. Google also recently released DreamFusion, which generates 3D models based on text prompts. Few of these, other than DreamFusion, are currently available to the public.
There's still the question of how these companies get the data to train their models, because AI can only learn art from other people's art. Much of the data is not taken with the artists' consent; it is usually web-scraped from Google, Pinterest, and blog platforms like WordPress, as was the case for Stable Diffusion. OpenAI has yet to release details on the hundreds of millions of images used for DALL-E 2, drawn from both publicly available and licensed sources. Companies like Meta and Google have also taken data from publicly available datasets made explicitly for non-commercial use, though it's doubtful Meta and Google are really using them for non-commercial purposes. It's not that these companies aren't involved in the data collection, however: they also heavily fund the organizations collecting the data. So the maneuvering appears to be a way to shift accountability from the companies to the smaller organizations they use to collect their data.