Cutting cost and power consumption for big data

Random-access memory, or RAM, is where computers like to store the data they’re working on. A processor can retrieve data from RAM tens of thousands of times more rapidly than it can from the computer’s disk drive.

But in the age of big data, data sets are often much too large to fit in a single computer’s RAM. Sequencing data describing a single large genome could take up the RAM of somewhere between 40 and 100 typical computers.

Flash memory — the type of memory used by most portable devices — could provide an alternative to conventional RAM for big-data applications. It’s about a tenth as expensive, and it consumes about a tenth as much power.

The problem is that it’s also a tenth as fast. But at the International Symposium on Computer Architecture in June, MIT researchers presented a new system that, for several common big-data applications, should make servers using flash memory as efficient as those using conventional RAM, while preserving their power and cost savings.

The researchers also presented experimental evidence showing that, if the servers executing a distributed computation have to go to disk for data even 5 percent of the time, their performance falls to a level that’s comparable with flash, anyway.

In other words, even without the researchers’ new techniques for accelerating data retrieval from flash memory, 40 servers with 10 terabytes’ worth of RAM couldn’t handle a 10.5-terabyte computation any better than 20 servers with 20 terabytes’ worth of flash memory, which would consume only a fraction as much power.

“This is not a replacement for DRAM [dynamic RAM] or anything like that,” says Arvind, the Johnson Professor of Computer Science and Engineering at MIT, whose group performed the new work. “But there may be many applications that can take advantage of this new style of architecture. Which companies recognize: Everybody’s experimenting with different aspects of flash. We’re just trying to establish another point in the design space.”

Joining Arvind on the new paper are Sang Woo Jun and Ming Liu, MIT graduate students in computer science and engineering and joint first authors; their fellow grad student Shuotao Xu; Sungjin Lee, a postdoc in Arvind’s group; Myron King and Jamey Hicks, who did their PhDs with Arvind and were researchers at Quanta Computer when the new system was developed; and one of their colleagues from Quanta, John Ankcorn — who is also an MIT alumnus.

Outsourced computation

The researchers were able to make a network of flash-based servers competitive with a network of RAM-based servers by moving a little computational power off of the servers and onto the chips that control the flash drives. By preprocessing some of the data on the flash drives before passing it back to the servers, those chips can make distributed computation much more efficient. And since the preprocessing algorithms are wired into the chips, they dispense with the computational overhead associated with running an operating system, maintaining a file system, and the like.
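
The paper's system implements this in hardware, but the data-movement win is easy to simulate. Below is a minimal Python sketch in which an invented FlashController class stands in for the controller chip: the host hands a filter to the "device," which scans its pages and returns only the matching records, so far less data crosses the bus.

```python
# Hypothetical simulation of the idea above: FlashController and its
# scan() method are invented for illustration; they are not the
# researchers' actual hardware interface.

class FlashController:
    """Stands in for a flash controller chip with a wired-in filter."""

    def __init__(self, pages):
        self.pages = pages  # each page holds (key, value) records

    def scan(self, predicate):
        # "On-device" preprocessing: apply the filter page by page and
        # return only the records the host actually needs.
        for page in self.pages:
            yield from (rec for rec in page if predicate(rec))


# Ten pages of 100 records each live on "flash."
pages = [[(i, i * i) for i in range(p * 100, (p + 1) * 100)]
         for p in range(10)]
controller = FlashController(pages)

# The host receives 11 matching records instead of all 1,000.
matches = list(controller.scan(lambda rec: rec[1] % 97 == 0))
print(f"{len(matches)} of 1000 records crossed the bus")
```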

Using computer science to improve people’s lives

When senior Donald Little saw a need for better day-to-day organization in his fraternity house, he got to work on a Web application to address it. The app, called FratWorks, is just one example of how Little has improved people’s daily lives during his time at MIT. Whether he is developing useful apps or mentoring other students, the computer science and engineering major strives to connect with others and help them in ways big and small.

A broad perspective

Little credits his background with helping him gain a broad perspective on the world at a young age. Little’s family moved from Texas to Argentina when he was 2 and moved to Egypt when he was 13. In Egypt, he attended an international high school along with students from all over the world.

“There were a bunch of different cultures, a bunch of different languages, a bunch of different ideologies,” says Little. “But ultimately everyone worked together, so that opened up my mind as to how the world as a whole is a big place, but at the same time it is a small place.”

It was his experience in high school that taught Little the importance of connecting with people and keeping an open mind.

“Every time you met someone new, you learned a new cultural bit from wherever that person was from,” says Little. “That made me realize that you always learn something new from the people you meet, and the people you meet also shape you. And that kept encouraging me to meet new people.”

Helping others with computer science

When Little arrived at MIT, he was confident that he would major in either electrical engineering or mechanical engineering, but he soon realized that he enjoyed computer science even more. During an advanced Python course, Little, along with classmates Sami Alsheikh and Michael Handley, designed a computer game called “Balloon Boy,” where a boy with a balloon must move left and right to avoid falling nails. The game, which required the team to apply a lot of physics, showed Little the interdisciplinary nature of computer science and also taught him how much fun it can be.

“Because I saw a lot of my friends playing the game and having fun, I was like ‘hey, I think I can do something that other people are going to make use of and enjoy,’” he says. “And at the end of the day it’s also something that I enjoy doing.”

Since then, Little has focused on using his computer science skills to create programs that help people by improving or simplifying their daily lives. During his sophomore year, he noticed that many MIT students have difficulty navigating MIT’s numbering system for courses. Little responded by designing a plug-in that let students click on a course number in a Facebook conversation and see a pop-up window with the course name, description, and evaluations.

Along the same lines, when Little was frustrated that he could only access MIT’s printers from his laptop, he developed an app that allows students to print from their phones.

“Technology evolves very quickly and it’s always hard for institutions or groups to keep up,” says Little. “So creating these accessibility tools allows groups to catch up or see a glimpse of what’s possible with technology.”

Little’s most successful app to date is Ranger Dave Sent Me, which he developed with the goal of improving people’s experiences at music festivals. These festivals tend to last all day or all weekend long, and people often know only a few of the many artists performing. The app uses festival-goers’ individual preferences to help them build full schedules of music they might enjoy. It won first place at the OutsideHacks hackathon and became the official app of the Outside Lands 2015 music and art festival, growing from zero to 18,000 users in 48 hours.

Finally, FratWorks is the task management system that Little created this past January to help the 50 brothers living in his fraternity house stay on top of their chores and divide the housework evenly. The app, which won first place in 6.148 (Independent Activities Period Programming Competition), allows people to sign up for tasks that need to be done, and sends frequent reminders. Little sees FratWorks as his way of giving back to his fraternity. He also thinks it has applications beyond fraternities and eventually hopes to release it to the public.

Engineering savvy to improve product designs

When Edward (Ned) Burnell sees a design problem, he is always ready to find a better solution. Even while chatting with a journalist outside his office, he points out ceilings and windows in different spaces and describes how he would improve them.

For Burnell, a master’s student in mechanical engineering, thinking about design is a way of life; whether he is creating a novel windmill or improving engineering software, he enjoys using his skills to figure out innovative solutions to familiar problems.

Growing up off the grid

Burnell’s unconventional way of looking at the world can be traced back to his upbringing in Northern California. Burnell describes his childhood home in Mendocino County as “off the grid,” located in a remote area that had only dirt roads at the time.

“The town I grew up in actually is not even a town,” he says. “There’s no municipal service besides the fire department. There is one gas station, one grocery store across from the post office, a Presbyterian church, and a K-3 school.”

Burnell was homeschooled by his parents until high school, when he began attending a nearby public school started by people in his community. His high school was different from most: Conventional desks and chairs were replaced by couches, and students could wander in and out of the main classroom at the center of the school at any point during the day. Burnell enjoyed his school’s unusual culture and appreciated its focus on intrinsic motivation.

Finding his place at MIT

Burnell, who also has a BS in mechanical engineering from MIT, didn’t expect to attend such a large university. But during a campus visit, he was immediately attracted to the vibe of the East Campus dorms, including the one where he lived as an undergraduate, which he describes as wacky and a little bit punk.

“It’s in this old bunker that has really thick walls,” he says. “Everyone just paints all over everything. It’s like an old industrial district, safe because no one wants to mess with it.”

Growing up, Burnell hadn’t given much thought to what went into designing the products he saw around him — a finished bicycle, cup, or piece of furniture. As an undergraduate at MIT, however, he began thinking about the entire process of making products, and all of the decisions that lead to designing products in a particular way. This is what motivated him to pursue mechanical engineering.

“It was coming here and seeing, oh, there are actually all these ways in which these things do get made by people, and hey, I can be one of those people,” says Burnell.

As an undergraduate, Burnell also began thinking about how improving lesson design can help students learn more effectively. He traveled to Ghana, first as part of MIT’s D-Lab and later on a fellowship from MIT’s Priscilla King Gray Public Service Center, where he taught students hands-on engineering lessons. One particularly successful lesson involved guiding students as they built a battery to power an LED light, using basic materials such as copper wire, a bag of charcoal, a soda can, and salt water.

“These were very familiar objects,” he says. “We could’ve, on the walk to school, gotten literally all the products, and the fact that they did this really unexpected thing [made the students] really curious — students were in disbelief.”

Burnell also worked on creating an informal, interactive classroom environment where students were presented with open-ended projects that encouraged them to explore and ask questions, something they hadn’t previously experienced in their more traditional, fact-based British educational system.

Professors’ choice-modeling software predicts what customers will buy

U.S. retail chains often rely on intuition in choosing which products, from a vast inventory, will sell best at stores across the nation. Now MIT spinout Celect is refining this process with novel data analytics, revealing interesting insights into how retailers can optimize their shelf space.

Co-founded by MIT professors Vivek Farias and Devavrat Shah, Celect develops software that crunches a store’s sales and inventory data — and, sometimes, online buying data — to determine which products local customers want to buy.

Powered by algorithms the professors originally developed to improve recommendation engines, such as Netflix’s, and to predict trending Twitter topics, the software compares items located near each other in an individual store and statistically determines which will sell better, based on sales records. Analyzed at scale — over thousands or millions of product comparisons — this reveals the buying preferences of customers as a population.
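
Celect's engine is proprietary, but a standard way to turn a "bag of comparisons" into population-level preference scores is a pairwise choice model such as Bradley-Terry. The sketch below, with made-up sales counts, fits one in a few lines of NumPy; it illustrates the style of computation, not Celect's actual algorithm.

```python
import numpy as np

# Made-up counts: wins[i, j] = times item i outsold item j when the two
# were stocked side by side. Hunter's MM updates for the Bradley-Terry
# model turn the bag of comparisons into one score per item.
wins = np.array([
    [0., 6., 4.],
    [2., 0., 5.],
    [1., 3., 0.],
])
n = wins + wins.T                     # total comparisons per pair
scores = np.ones(3)

for _ in range(200):                  # iterate the MM update to a fixed point
    for i in range(3):
        denom = sum(n[i, j] / (scores[i] + scores[j])
                    for j in range(3) if j != i)
        scores[i] = wins[i].sum() / denom
    scores /= scores.sum()            # normalize for readability

print(scores)                         # higher score = stronger preference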

“We basically create a bag of comparisons and convert that into a black box … known as our customer-choice engine,” says Shah, an associate professor of electrical engineering and computer science, and chief science officer at Celect.

Retailers then plug budget, shipping, and other parameters into the software’s interface. The software takes these into consideration, while analyzing the assortment and prices of items in the store, shelf space, and other parameters to find the optimal mix of items to stock. Results will indicate how many items will sell, and the overall profit: An expensive item with a low likelihood to sell, for instance, may be marginally better for the bottom line than a low-cost top seller.
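
A toy version of that stocking decision makes the trade-off concrete: maximize expected profit subject to a shelf-space budget. The item data and the greedy rule below are invented for illustration; Celect's optimizer handles far richer constraints.

```python
# Pick the mix of items that maximizes expected profit per shelf slot.
items = [
    # (name, probability of selling, profit per sale, shelf slots used)
    ("budget mug",   0.90,  2.0, 1),
    ("espresso set", 0.15, 40.0, 2),
    ("church hat",   0.40, 18.0, 1),
]
budget = 2  # shelf slots available

# Rank by expected profit per slot, then fill the shelf greedily.
ranked = sorted(items, key=lambda it: it[1] * it[2] / it[3], reverse=True)
chosen, used = [], 0
for name, prob, profit, slots in ranked:
    if used + slots <= budget:
        chosen.append(name)
        used += slots

print(chosen)  # ['church hat', 'budget mug']
```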

“It’s very complicated behind the scenes. But at the end of the day it’s doing something very simple: tracking who wants to buy what, and matching the product with the person,” says Farias, the Robert N. Noyce Career Development Associate Professor at the MIT Sloan School of Management and Celect’s chief technology officer.

Insightful data

Launched in 2012, Celect now has eight big-name retail clients, some with several hundred stores across the nation. Last month, it raised a first funding round of $5 million from venture capitalists.

So far, Celect’s software has provided some interesting insights into why products may or may not move off the shelves.

Recently, a Midwest retailer using Celect learned that it needed to stock, of all things, church hats. Using Celect software, the retailer’s analysts noticed that people buying soccer shorts and certain watches in some stores were also buying fancy hats for church. One particular store didn’t carry church hats; when the chain stocked them, they sold rapidly. Now, the retailer plans to incorporate the software across its 270 branches nationwide.

This type of insight is possible through comparisons between sales data in different stores, Shah explains. Say, for example, store A and store B stock many similar items, but store A stocks one extra item not found in store B. Celect may find customers have very similar buying patterns in stores A and B, but are also buying that extra item in store A.

“We can extrapolate that people in store A will have similar buying preferences on that absent product,” Shah says. “If the connection is strong enough, we recommend that missing item be stocked.”
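
The store A / store B logic can be sketched with invented weekly sales figures. The snippet below is just the intuition, not Celect's model: if the two stores' buying patterns on shared items are nearly identical, extrapolate store A's sales of the missing item to store B.

```python
import numpy as np

items = ["soccer shorts", "watch", "church hat"]
store_a = np.array([120.0, 80.0, 60.0])  # store A stocks all three
store_b = np.array([110.0, 85.0, 0.0])   # store B doesn't carry church hats

shared = [0, 1]                           # items both stores sell
a, b = store_a[shared], store_b[shared]
similarity = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

if similarity > 0.95:                     # buying patterns nearly identical
    # Scale store A's church-hat sales by the stores' overall volume ratio.
    estimate = store_a[2] * b.sum() / a.sum()
    print(f"similarity {similarity:.3f}: stock hats, expect ~{estimate:.0f}/week")
```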

More recently, another retailer using Celect assumed that because a certain article of clothing was old, it sold less than newer clothing. But Celect’s data analysis revealed that this was not the case: The popularity of the item depended entirely on its color. The popular colors were grabbed up quickly, while the less popular colors remained on the shelves for months.

“Those kind of features just pop out of the data,” Farias says.

World of choices

Celect’s core technology traces back a decade, to when Farias and Shah sought to improve recommendation engines used by Pandora, Netflix, and other online services by predicting preferences based on paired comparisons of movies and products.

In a series of papers from 2008 to 2011, Farias and Shah described an algorithm that used this choice model, as opposed to, say, the five-star rating scale used by Netflix, to better predict people’s preferences. In 2012, they used the algorithm to predict trending topics on Twitter with 95 percent accuracy, up to four or five hours before they trended.

But a case study published in 2011 in Management Science demonstrated the algorithm’s potential for real-world, commercial application by better predicting car-buying preferences.

New model predicts wind speeds more accurately

When a power company wants to build a new wind farm, it generally hires a consultant to make wind speed measurements at the proposed site for eight to 12 months. Those measurements are correlated with historical data and used to assess the site’s power-generation capacity.

At the International Joint Conference on Artificial Intelligence later this month, MIT researchers will present a new statistical technique that yields better wind-speed predictions than existing techniques do — even when it uses only three months’ worth of data. That could save power companies time and money, particularly in the evaluation of sites for offshore wind farms, where maintaining measurement stations is particularly costly.

“We talked with people in the wind industry, and we found that they were using a very, very simplistic mechanism to estimate the wind resource at a site,” says Kalyan Veeramachaneni, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and first author on the new paper. In particular, Veeramachaneni says, standard practice in the industry is to model correlations in wind-speed data using a so-called Gaussian distribution — the “bell curve” familiar from basic statistics.

“The data here is non-Gaussian; we all know that,” Veeramachaneni says. “You can fit a bell curve to it, but that’s not an accurate representation of the data.”

Typically, a wind energy consultant will find correlations between wind speed measurements at a proposed site and those made, during the same period, at a nearby weather station where records stretch back for decades. On the basis of those correlations, the consultant will adjust the weather station’s historical data to provide an approximation of wind speeds at the new site.

The correlation model is what’s known in statistics as a joint distribution. That means that it represents the probability not only of a particular measurement at one site, but of that measurement’s coincidence with a particular measurement at the other. Wind-industry consultants, Veeramachaneni says, usually characterize that joint distribution as a Gaussian distribution.

Different curves

The first novelty of the model that Veeramachaneni developed with his colleagues — Una-May O’Reilly, a principal research scientist at CSAIL, and Alfredo Cuesta-Infante of the Universidad Rey Juan Carlos in Madrid — is that it can factor in data from more than one weather station. In some of their analyses, the researchers used data from 15 or more other sites.

But its main advantage is that it’s not restricted to Gaussian probability distributions. Moreover, it can use different types of distributions to characterize data from different sites, and it can combine them in different ways. It can even use so-called nonparametric distributions, in which the data are described not by a mathematical function, but by a collection of samples, much the way a digital music file consists of discrete samples of a continuous sound wave.

Another aspect of the model is that it can find nonlinear correlations between data sets. Standard regression analysis, of the type commonly used in the wind industry, identifies the straight line that best approximates a scattering of data points, according to some distance measure. But often, a curved line would offer a better approximation. The researchers’ model allows for that possibility.
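
One standard way to build a joint distribution with these properties is a copula: fit each site's marginal distribution nonparametrically, as a collection of samples, and then couple the marginals. The SciPy sketch below does this on synthetic wind data; it illustrates the style of model described above, not necessarily the paper's exact construction.

```python
import numpy as np
from scipy import stats

# Empirical (nonparametric) marginals joined by a Gaussian copula: the
# resulting joint distribution is non-Gaussian and captures a nonlinear
# site-station relationship. All data here is synthetic.
rng = np.random.default_rng(0)
station = rng.weibull(2.0, 5000) * 8              # long-record station data
site = station ** 1.3 + rng.normal(0, 1, 5000)    # site responds nonlinearly

def gaussian_scores(x):
    # Rank-transform to uniforms (the empirical marginal), then to normals.
    return stats.norm.ppf(stats.rankdata(x) / (len(x) + 1))

rho = np.corrcoef(gaussian_scores(station), gaussian_scores(site))[0, 1]

# Predict the site's median wind speed given a new station reading by
# pushing it through the copula and back out the site's empirical marginal.
reading = 10.0
z = stats.norm.ppf(stats.percentileofscore(station, reading) / 100)
u_cond = stats.norm.cdf(rho * z)          # median of the conditional copula
print(f"rho={rho:.2f}, predicted site speed: {np.quantile(site, u_cond):.1f}")
```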

Validation

The researchers first applied their technique to data collected from an anemometer on top of the Museum of Science in Boston, which was looking to install a wind turbine on its roof. Once they had evidence of their model’s accuracy, they applied it to data provided to them by a major consultant in the wind industry.

With only three months of the company’s historical data for a particular wind farm site, Veeramachaneni and his colleagues were able to predict wind speeds over the next two years three times as accurately as existing models could with eight months of data. Since then, the researchers have improved their model by evaluating alternative ways of calculating joint distributions. According to additional analysis of the data from the Museum of Science, which is reported in the new paper, their revised approach could double the accuracy of their predictions.

Q&A with Guy Bresler, professor of electrical engineering and computer science

Guy Bresler joined the MIT faculty in September 2015 as the Bonnie and Marty (1964) Tenenbaum Career Development Professor in the Department of Electrical Engineering and Computer Science (EECS). He also joined the Institute for Data, Systems, and Society (IDSS) — which addresses complex societal challenges by advancing education and research at the intersection of statistics, data science, information and decision systems, and social sciences — as a member of the Laboratory for Information and Decision Systems (LIDS).

Bresler’s research investigates the relationship between combinatorial structure and computational tractability of high-dimensional inference in graphical models and other statistical models. His current work focuses on learning graphical models from data, and explores how both data and computation requirements can be reduced if the model is subsequently used for a specific inference task. Bresler is also interested in applications of these methods, especially to recommendation systems and computational biology. 

Bresler spoke with IDSS about some of his work and his perspective on being part of both LIDS and IDSS.

Q: You describe your research interests in largely theoretical terms — combinatorial structure, computational tractability, high-dimensional inference. How would you explain your work to those not in the field?

A: The basic question in most of machine learning and much of statistics is: How do you come up with a good model for some phenomenon you’re observing? There are a lot of models out there and a lot of complicated phenomena. Learning more complicated models generally requires more data. As I get more data, can I get a better sense of what’s a good model within a given class?

The problem with very complex models is that learning requires too much data. The idea is that if you have some intuition about what the right model is for what you’re observing, you can narrow things down. One type of model I’ve been interested in is graphical models. The concept of “graph” is pretty basic to a lot of the things I’m interested in. In a graphical model you have a bunch of points representing variables and some of them interact so you draw an edge, or line, between them, and some of them aren’t connected. When learning the model, the goal is to understand what’s influencing what. Such graphs can represent many things, among them social networks, for example, or gene regulatory networks.
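
Bresler's answer can be made concrete with a small example. The sketch below uses a textbook method, L1-penalized neighborhood regression with scikit-learn, to recover which binary variables influence which from synthetic data; it illustrates the graphical-model-learning task, not Bresler's own algorithms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Neighborhood selection: regress each binary variable on all the others
# with an L1 penalty and draw an edge wherever a coefficient is clearly
# nonzero. Standard methodology, shown only to make the idea concrete.
rng = np.random.default_rng(1)
n = 2000
X = rng.integers(0, 2, (n, 4)).astype(float)      # four binary variables
flip = rng.random(n) < 0.1
X[:, 3] = np.where(flip, 1 - X[:, 0], X[:, 0])    # variable 3 tracks variable 0

edges = set()
for j in range(4):
    others = [k for k in range(4) if k != j]
    model = LogisticRegression(penalty="l1", C=0.5, solver="liblinear")
    model.fit(X[:, others], X[:, j])
    for k, coef in zip(others, model.coef_[0]):
        if abs(coef) > 0.5:                       # edge if influence is strong
            edges.add(tuple(sorted((j, k))))

print(edges)  # expect just {(0, 3)}: only variables 0 and 3 interact
```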

Q: It sounds like you do a lot of theoretical work that you can then apply to different things. Are you working on particular applications right now?

A: I’m interested in several different application domains. One application domain I’m getting more involved in now is genomics. You want to predict, for example: If I knock out this gene in your DNA sequence, what effect is that going to have on the expression of these other genes? In order to make such a prediction, one can first learn a model for how these genes are interacting, and that’s done from data. Certain types of graphical models seem fairly well suited to this problem.

The basic causal prediction question comes up everywhere, and the hope is to come up with statistical methodology that is as universally applicable as possible. It’s useful to try and focus on one application at first, and actually validate things experimentally. Part of this genomics work we’re just starting now with Caroline Uhler [in EECS], Philippe Rigollet [in the Department of Mathematics], Jon Kelner [in the Department of Mathematics], and Aviv Regev [in the Department of Biology]. Aviv Regev’s lab can do single-cell experiments where we can knock out genes or change the expression of a single gene. So we can take it full cycle: learn models from a bunch of data that only she can produce for us, using statistical methodology that we’ll develop, and then go validate to see: Did we do a good job?

Q: Why did you choose to make LIDS and IDSS your intellectual home?

A: LIDS is a wonderful place where people have a lot of freedom to think about challenging and interesting problems. It has this great combination of scholarship coupled with drive and curiosity; a great mix of theoretical work with engineering and systems motivation. It’s definitely a place where I feel really happy.

And IDSS — It’s super exciting what’s happening. There’s a steadily growing critical mass of people working on related questions, which creates a lot of energy. This builds on MIT’s heritage and strength in areas such as computation and control theory. For instance, [IDSS faculty member] Philippe Rigollet’s work has helped change the way we think about some statistical problems and how they interplay with computational questions. To me certainly that’s one of the most exciting things: how much interaction there is already and I think will continue to be between statistics and other fields.

Q: Is the policy and social science aspect of IDSS of interest to you?

A: I think having social science be part of IDSS is fantastic. If everybody is a theoretician it’s possible to get a bit detached from the real world, so it’s crucial to have domain areas where we want to make a big impact and IDSS is doing this. But even purely from the theoretical point of view, many interesting theoretical questions are motivated by practical constraints.

So I think that in terms of having impact on the world, there’s a lot of potential to do that with the right mix of people at IDSS. It’s something that may be more difficult within a single department; it’s harder to bring together that group of people from different backgrounds. Trying to have a big positive impact on the world, it’s awesome to have this as one of the driving visions of IDSS.

Cryptographic system would allow users to decide which applications access which aspects of their data

Most people with smartphones use a range of applications that collect personal information and store it on Internet-connected servers — and from their desktop or laptop computers, they connect to Web services that do the same. Some use still other Internet-connected devices, such as thermostats or fitness monitors, that also store personal data online.

Generally, users have no idea which data items their apps are collecting, where they’re stored, and whether they’re stored securely. Researchers at MIT and Harvard University hope to change that, with an application they’re calling Sieve.

With Sieve, a Web user would store all of his or her personal data, in encrypted form, on the cloud. Any app that wanted to use specific data items would send a request to the user and receive a secret key that decrypted only those items. If the user wanted to revoke the app’s access, Sieve would re-encrypt the data with a new key.

“This is a rethinking of the Web infrastructure,” says Frank Wang, a PhD student in electrical engineering and computer science and one of the system’s designers. “Maybe it’s better that one person manages all their data. There’s one type of security and not 10 types of security. We’re trying to present an alternative model that would be beneficial to both users and applications.”

The researchers are presenting Sieve at the USENIX Symposium on Networked Systems Design and Implementation this month. Wang is the first author, and he’s joined by MIT associate professors of electrical engineering and computer science Nickolai Zeldovich and Vinod Vaikuntanathan, who is MIT’s Steven and Renee Finn Career Development Professor, and by James Mickens, an associate professor of computer science at Harvard University.

Selective disclosure

Sieve required the researchers to develop practical versions of two cutting-edge cryptographic techniques called attribute-based encryption and key homomorphism.

With attribute-based encryption, data items in a file are assigned different labels, or “attributes.” After encryption, secret keys can be generated that unlock only particular combinations of attributes: name and zip code but not street name, for instance, or zip code and date of birth but not name.
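
Real attribute-based encryption relies on pairing-based cryptography; the toy below, which assumes the third-party pyca/cryptography package, merely mimics the access pattern with one symmetric key per attribute, to make the "key unlocks only some attributes" idea concrete. It is not Sieve's scheme.

```python
from cryptography.fernet import Fernet

# Toy stand-in for attribute-based encryption: a "secret key" handed to
# an app is just the subset of per-attribute keys it was granted. Real
# ABE issues a single key that mathematically enforces the policy.
attribute_keys = {attr: Fernet.generate_key()
                  for attr in ("name", "zip", "birthdate")}

def encrypt(record):
    return {attr: Fernet(attribute_keys[attr]).encrypt(value.encode())
            for attr, value in record.items()}

stored = encrypt({"name": "Ada", "zip": "02139", "birthdate": "1815-12-10"})

# Grant an app "zip" and "birthdate" but not "name".
granted = {a: attribute_keys[a] for a in ("zip", "birthdate")}
for attr, ciphertext in stored.items():
    if attr in granted:
        print(attr, Fernet(granted[attr]).decrypt(ciphertext).decode())
    else:
        print(attr, "<no key: unreadable>")
```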

The problem with attribute-based encryption — and decryption — is that it’s slow. To get around that, the MIT and Harvard researchers envision that Sieve users would lump certain types of data together under a single attribute. For instance, a doctor might be interested in data from a patient’s fitness-tracking device but probably not in the details of a single afternoon’s run. The user might choose to group fitness data by month.

This introduces problems of its own, however. A fitness-tracking device probably wants to store data online as soon as the data is generated, rather than waiting until the end of the month for a bulk upload. But data uploaded to the cloud yesterday could end up in a very different physical location than data uploaded by the same device today.

So Sieve includes tables that track the locations at which grouped data items are stored in the cloud. Each of those tables is encrypted under a single attribute, but the data they point to are encrypted using standard — and more efficient — encryption algorithms. As a consequence, the size of the data item encrypted through attribute-based encryption — the table — is fixed, which makes decryption more efficient.

In experiments, the researchers found that decrypting a month’s worth of, say, daily running times grouped under a single attribute would take about 1.5 seconds, whereas if each day’s result was encrypted under its own attribute, decrypting a month’s worth would take 15 seconds.

Wang developed an interface that displays a Sieve user’s data items as a list and allows the user to create and label icons that represent different attributes. Dragging a data item onto an icon assigns it that attribute. At the moment, the interface is not particularly user friendly, but its purpose is to show that the underlying encryption machinery works properly.

Blind manipulation

Key homomorphism is what enables Sieve to revoke an app’s access to a user’s data. With key homomorphism, the cloud server can re-encrypt the data it’s storing without decrypting it first — or without sending it to the user for decryption, re-encryption, and re-uploading. In this case, the researchers had to turn work that was largely theoretical into a working system.
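
The paper implements a genuine key-homomorphic scheme; the number-theoretic toy below only conveys the flavor of the trick. The server shifts a ciphertext from an old key to a new one using just the difference of the keys, never seeing the plaintext.

```python
# Toy key-homomorphic masking: Enc_k(m) = m * g^k (mod p), so applying
# g^(k_new - k_old) moves a ciphertext to the new key without decrypting.
# Illustration only, not Sieve's actual construction.
p = 2**127 - 1                 # a Mersenne prime, fine for a toy
g = 3

def encrypt(m, k):
    return (m * pow(g, k, p)) % p

def decrypt(c, k):
    return (c * pow(g, -k, p)) % p   # pow with -k computes the modular inverse

message, k_old, k_new = 42, 123456789, 987654321
c = encrypt(message, k_old)

# Server-side re-encryption: multiply by g^(k_new - k_old).
c_rotated = (c * pow(g, k_new - k_old, p)) % p

assert decrypt(c_rotated, k_new) == message
print("re-encrypted without decrypting:", decrypt(c_rotated, k_new))
```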

Language app makes meal logging easier

For people struggling with obesity, logging calorie counts and other nutritional information at every meal is a proven way to lose weight. The technique does require consistency and accuracy, however, and when it fails, it’s usually because people don’t have the time to find and record all the information they need.

A few years ago, a team of nutritionists from Tufts University who had been experimenting with mobile-phone apps for recording caloric intake approached members of the Spoken Language Systems Group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) with the idea of a spoken-language application that would make meal logging even easier.

This week, at the International Conference on Acoustics, Speech, and Signal Processing in Shanghai, the MIT researchers are presenting a Web-based prototype of their speech-controlled nutrition-logging system.

With it, the user verbally describes the contents of a meal, and the system parses the description and automatically retrieves the pertinent nutritional data from an online database maintained by the U.S. Department of Agriculture (USDA).

The data is displayed together with images of the corresponding foods and pull-down menus that allow the user to refine their descriptions — selecting, for instance, precise quantities of food. But those refinements can also be made verbally. A user who begins by saying, “For breakfast, I had a bowl of oatmeal, bananas, and a glass of orange juice” can then make the amendment, “I had half a banana,” and the system will update the data it displays about bananas while leaving the rest unchanged.

“What [the Tufts nutritionists] have experienced is that the apps that were out there to help people try to log meals tended to be a little tedious, and therefore people didn’t keep up with them,” says James Glass, a senior research scientist at CSAIL, who leads the Spoken Language Systems Group. “So they were looking for ways that were accurate and easy to input information.”

The first author on the new paper is Mandy Korpusik, an MIT graduate student in electrical engineering and computer science. She’s joined by Glass, who is her thesis advisor; her fellow graduate student Michael Price; and Calvin Huang, an undergraduate researcher in Glass’s group.

Context sensitivity

In the paper, the researchers report the results of experiments with a speech-recognition system that they developed specifically to handle food-related terminology. But that wasn’t the main focus of their work; indeed, an online demo of their meal-logging system instead uses Google’s free speech-recognition app.

Their research concentrated on two other problems. One is identifying words’ functional role: The system needs to recognize that if the user records the phrase “bowl of oatmeal,” nutritional information on oatmeal is pertinent, but if the phrase is “oatmeal cookie,” it’s not.

The other problem is reconciling the user’s phrasing with the entries in the USDA database. For instance, the USDA data on oatmeal is recorded under the heading “oats”; the word “oatmeal” shows up nowhere in the entry.
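
The vocabulary-reconciliation problem can be pictured with a few lines of the standard library. The real system learns the mapping from data; difflib's string similarity below is just a stand-in to show the shape of the task.

```python
import difflib

# Map the user's word to the closest USDA-style heading. The headings
# here are abbreviated examples, not the actual database entries.
usda_headings = ["oats", "bananas, raw", "orange juice, raw", "milk, whole"]

def reconcile(user_term):
    matches = difflib.get_close_matches(user_term, usda_headings,
                                        n=1, cutoff=0.3)
    return matches[0] if matches else None

print(reconcile("oatmeal"))       # -> 'oats'
print(reconcile("orange juice"))  # -> 'orange juice, raw'
```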

To address the first problem, the researchers used machine learning. Through the Amazon Mechanical Turk crowdsourcing platform, they recruited workers who simply described what they’d eaten at recent meals, then labeled the pertinent words in the description as names of foods, quantities, brand names, or modifiers of the food names. In “bowl of oatmeal,” “bowl” is a quantity and “oatmeal” is a food, but in “oatmeal cookie,” “oatmeal” is a modifier.
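
A toy version of that labeling step, trained on a handful of invented examples rather than the paper's crowdsourced data, shows why context matters. Each token is classified from two features, the word itself and the word after it; the paper's models are far more sophisticated.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented training set: (word, next word) -> label.
train = [
    (("bowl", "of"), "QUANTITY"),        (("oatmeal", "</s>"), "FOOD"),
    (("oatmeal", "cookie"), "MODIFIER"), (("cookie", "</s>"), "FOOD"),
    (("glass", "of"), "QUANTITY"),       (("juice", "</s>"), "FOOD"),
    (("oatmeal", "muffin"), "MODIFIER"), (("muffin", "</s>"), "FOOD"),
]
feats = [{"word": w, "next": nxt} for (w, nxt), _ in train]
labels = [lab for _, lab in train]

vec = DictVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(feats), labels)

def tag(tokens):
    fs = [{"word": w, "next": tokens[i + 1] if i + 1 < len(tokens) else "</s>"}
          for i, w in enumerate(tokens)]
    return list(zip(tokens, clf.predict(vec.transform(fs))))

print(tag(["bowl", "of", "oatmeal"]))  # expect "oatmeal" tagged FOOD
print(tag(["oatmeal", "cookie"]))      # expect "oatmeal" tagged MODIFIER
```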

Artificial intelligence produces realistic sounds

For robots to navigate the world, they need to be able to make reasonable assumptions about their surroundings and what might happen during a sequence of events.

One way that humans come to learn these things is through sound. For infants, poking and prodding objects is not just fun; some studies suggest that it’s actually how they develop an intuitive theory of physics. Could it be that we can get machines to learn the same way?

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have demonstrated an algorithm that has effectively learned how to predict sound: When shown a silent video clip of an object being hit, the algorithm can produce a sound for the hit that is realistic enough to fool human viewers.

This “Turing Test for sound” represents much more than just a clever computer trick: Researchers envision future versions of similar algorithms being used to automatically produce sound effects for movies and TV shows, as well as to help robots better understand objects’ properties.

“When you run your finger across a wine glass, the sound it makes reflects how much liquid is in it,” says CSAIL PhD student Andrew Owens, who was lead author on an upcoming paper describing the work. “An algorithm that simulates such sounds can reveal key information about objects’ shapes and material types, as well as the force and motion of their interactions with the world.”

The team used techniques from the field of “deep learning,” which involves teaching computers to sift through huge amounts of data to find patterns on their own. Deep learning approaches are especially useful because they free computer scientists from having to hand-design algorithms and supervise their progress.

The paper’s co-authors include recent PhD graduate Phillip Isola and MIT professors Edward Adelson, Bill Freeman, Josh McDermott, and Antonio Torralba. The paper will be presented later this month at the annual conference on Computer Vision and Pattern Recognition (CVPR) in Las Vegas.

How it works

The first step to training a sound-producing algorithm is to give it sounds to study. Over several months, the researchers recorded roughly 1,000 videos of an estimated 46,000 sounds that represent various objects being hit, scraped, and prodded with a drumstick. (They used a drumstick because it provided a consistent way to produce a sound.)

Next, the team fed those videos to a deep-learning algorithm that deconstructed the sounds and analyzed their pitch, loudness, and other features.

“To then predict the sound of a new video, the algorithm looks at the sound properties of each frame of that video, and matches them to the most similar sounds in the database,” says Owens. “Once the system has those bits of audio, it stitches them together to create one coherent sound.”
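
That retrieval-and-stitch step can be sketched in a few lines of NumPy. In the toy below, random vectors stand in for the network's per-frame sound features and noise bursts stand in for the database's audio snippets; only the matching logic mirrors the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
db_features = rng.normal(size=(500, 32))               # training-frame features
db_audio = rng.normal(size=(500, 220))                 # matching audio snippets

def synthesize(video_features):
    clips = []
    for frame in video_features:
        # Find the training frame whose sound properties are closest.
        nearest = np.argmin(np.linalg.norm(db_features - frame, axis=1))
        clips.append(db_audio[nearest])
    return np.concatenate(clips)                       # stitch into one wave

silent_video = rng.normal(size=(30, 32))               # 30 frames of features
print(synthesize(silent_video).shape)                  # (6600,) audio samples
```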

The result is that the algorithm can accurately simulate the subtleties of different hits, from the staccato taps of a rock to the longer waveforms of rustling ivy. Pitch is no problem either, as it can synthesize hit-sounds ranging from the low-pitched “thuds” of a soft couch to the high-pitched “clicks” of a hard wood railing.

“Current approaches in AI only focus on one of the five sense modalities, with vision researchers using images, speech researchers using audio, and so on,” says Abhinav Gupta, an assistant professor of robotics at Carnegie Mellon University who was not involved in the study. “This paper is a step in the right direction to mimic learning the way humans do, by integrating sound and sight.”

New algorithm could stitch together astronomical measurements

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory, the Harvard-Smithsonian Center for Astrophysics, and the MIT Haystack Observatory have developed a new algorithm that could help astronomers produce the first image of a black hole.

The algorithm would stitch together data collected from radio telescopes scattered around the globe, under the auspices of an international collaboration called the Event Horizon Telescope. The project seeks, essentially, to turn the entire planet into a large radio telescope dish.

“Radio wavelengths come with a lot of advantages,” says Katie Bouman, an MIT graduate student in electrical engineering and computer science, who led the development of the new algorithm. “Just like how radio frequencies will go through walls, they pierce through galactic dust. We would never be able to see into the center of our galaxy in visible wavelengths because there’s too much stuff in between.”

But because of their long wavelengths, radio waves also require large antenna dishes. The largest single radio-telescope dish in the world has a diameter of 1,000 feet, but an image it produced of the moon, for example, would be blurrier than the image seen through an ordinary backyard optical telescope.

“A black hole is very, very far away and very compact,” Bouman says. “[Taking a picture of the black hole in the center of the Milky Way galaxy is] equivalent to taking an image of a grapefruit on the moon, but with a radio telescope. To image something this small means that we would need a telescope with a 10,000-kilometer diameter, which is not practical, because the diameter of the Earth is not even 13,000 kilometers.”

The solution adopted by the Event Horizon Telescope project is to coordinate measurements performed by radio telescopes at widely divergent locations. Currently, six observatories have signed up to join the project, with more likely to follow.

But even twice that many telescopes would leave large gaps in the data as they approximate a 10,000-kilometer-wide antenna. Filling in those gaps is the purpose of algorithms like Bouman’s.

Bouman will present her new algorithm — which she calls CHIRP, for Continuous High-resolution Image Reconstruction using Patch priors — at the Computer Vision and Pattern Recognition conference in June. She’s joined on the conference paper by her advisor, professor of electrical engineering and computer science Bill Freeman, and by colleagues at MIT’s Haystack Observatory and the Harvard-Smithsonian Center for Astrophysics, including Sheperd Doeleman, director of the Event Horizon Telescope project.

Hidden delays

The Event Horizon Telescope uses a technique called interferometry, which combines the signals detected by pairs of telescopes, so that the signals interfere with each other. Indeed, CHIRP could be applied to any imaging system that uses radio interferometry.

Usually, an astronomical signal will reach any two telescopes at slightly different times. Accounting for that difference is essential to extracting visual information from the signal, but the Earth’s atmosphere can also slow radio waves down, exaggerating differences in arrival time and throwing off the calculation on which interferometric imaging depends.

Bouman adopted a clever algebraic solution to this problem: If the measurements from three telescopes are multiplied, the extra delays caused by atmospheric noise cancel each other out. This does mean that each new measurement requires data from three telescopes, not just two, but the increase in precision makes up for the loss of information.
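
That cancellation can be verified in a few lines of NumPy. In the toy below, with invented numbers, each station adds an unknown atmospheric phase to every measurement it participates in; the phase of the triple product (the standard closure-phase form, with one conjugate) depends only on the sky signal.

```python
import numpy as np

rng = np.random.default_rng(0)
phi01, phi12 = rng.uniform(-1, 1, 2)   # true sky phases on two baselines
phi02 = phi01 + phi12                  # what a perfect instrument would see
a = rng.uniform(-np.pi, np.pi, 3)      # unknown atmospheric delay per station

# Each corrupted measurement picks up the two stations' phase errors.
v01 = np.exp(1j * (phi01 + a[0] - a[1]))
v12 = np.exp(1j * (phi12 + a[1] - a[2]))
v02 = np.exp(1j * (phi02 + a[0] - a[2]))

# In the triple product the per-station terms cancel exactly.
closure = np.angle(v01 * v12 * np.conj(v02))
print(np.isclose(closure, phi01 + phi12 - phi02))  # True: noise canceled
```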

Preserving continuity

Even with atmospheric noise filtered out, the measurements from just a handful of telescopes scattered around the globe are pretty sparse; any number of possible images could fit the data equally well. So the next step is to assemble an image that both fits the data and meets certain expectations about what images look like. Bouman and her colleagues made contributions on that front, too.
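
A miniature, hedged version of that idea: recover a signal from too few linear measurements by trading off data fit against a smoothness prior. CHIRP's patch priors are far more sophisticated; the ridge-penalized least squares below just makes the principle of "fits the data and looks like an image" concrete, in one dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
truth = np.convolve(rng.random(n), np.ones(5) / 5, mode="same")  # smooth signal

A = rng.normal(size=(6, n))            # only 6 measurements of 16 unknowns
y = A @ truth                          # the sparse data we actually observe

D = np.diff(np.eye(n), axis=0)         # finite-difference (smoothness) operator
lam = 1.0                              # strength of the prior

# Closed-form minimizer of ||Ax - y||^2 + lam * ||Dx||^2.
x = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)
print(f"relative error: {np.linalg.norm(x - truth) / np.linalg.norm(truth):.2f}")
```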