MOUNTAIN VIEW, Calif. — Industry veterans laid out four grand challenges in engineering at a National Academy of Engineering event here at the Computer History Museum: preparing the next generation of engineers, creating a more interactive medium than HTML, developing truly secure systems for the Internet of Things, and responding to the bandwidth crisis in big data centers.
Engineers should spend time in K–8 classrooms as a service project, “like coaching Little League — we have to grow an engineering culture,” said Alan Kay, a former fellow at Apple, Disney and Hewlett-Packard. “Our notion of service has to be directed toward the next generation…the kids can’t wait,” said Kay, who helped pioneer the concept of the personal computer at Xerox PARC in the 1970s.
However, he was quick to note it’s hard to find the right balance of engineering and teaching skills and effective techniques to engage young students. For example, massive open online courses, a current rage at the university level, “are terrible,” and “you have to have some insights to use” the Internet “as something other than a legal drug,” he quipped.
“Almost nothing” will come to pass from a 2002 initiative to define the needs of the engineer in 2020, said Kay, who participated in the effort. He called for engineers to think big to define bold directions.
“Why can’t engineering re-engineer itself?” he asked. “We need an imagination amplifier that lets us respond ahead of time” to unanticipated challenges such as climate change, he said.
Bret Victor, a user interface expert who worked on the iPad at Apple, stunned attendees with a preview of a smart, interactive interface. For example, a description of a filter lets users make changes in a related circuit diagram, formula and graph to see their implications across all three.
“This shows how the graph, equation and circuit diagram dance together. It gives you an intimacy with the system that you can’t get any other way,” he said, moving elements around in a demo.
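The underlying idea is that all three views are projections of the same data, so an edit in any one of them propagates to the others. Below is a minimal sketch of that linked-representation approach, assuming a simple RC low-pass filter; the function and names are hypothetical illustrations, not Victor’s actual tooling.

import math

# One shared set of parameters drives both the "equation" view and the
# points behind the "graph" view; the circuit view would edit r_ohms or
# c_farads and simply recompute.
def rc_lowpass_views(r_ohms, c_farads, freqs_hz):
    cutoff_hz = 1 / (2 * math.pi * r_ohms * c_farads)                    # equation view
    gains = [1 / math.sqrt(1 + (f / cutoff_hz) ** 2) for f in freqs_hz]  # graph view
    return cutoff_hz, gains

# Dragging the resistor value in the circuit diagram would call this again
# with a new r_ohms, and the formula and graph re-render from the result.
cutoff, gains = rc_lowpass_views(1_000, 1e-6, [10, 100, 1_000, 10_000])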
A handful of engineers at Google are trying to use these techniques in a new publication they launched that is dedicated to machine learning. They may have “put in something like 100 hours on each article, so we have got to get over a hump,” said Victor, suggesting the technique may spawn a new class of programming-savvy graphics artists.
Nevertheless, the technique holds promise for richer, interactive experiences.
“We’ve gotten good at letting computers move bits around fast, but what bits represent are still things like paper that we are used to. We are using the internet as a giant fax machine,” Victor said.
Moderator Vint Cerf characterized the work as opening the door to a new kind of literacy. However, he noted the archival challenge that “the underlying software still has to work in 100 years.”
Others suggested the effort is part of a broad move from an era of information to one of experiences. Some cited work on large tiled panels as another attempt to deliver richer experiences.
Dealing with IoT’s insecurities
Security expert Peter Neumann discussed the government project he works on that aims to pave the way to provably secure systems. He is a principal investigator for the Defense Advanced Research Projects Agency on CRASH (Clean-Slate Design of Resilient, Adaptive, Secure Hosts), a program that aims to build self-healing systems resistant to cyber attacks.
Such systems are sorely needed. Even today’s devices using a hardware root of trust such as ARM’s TrustZone are vulnerable to side-channel attacks or fault injection, based on monitoring a system’s power use or sending disruptive energy pulses.
“We’re in a position where you can’t trust anything—the hardware or the software, and the best cryptography is embedded in systems that aren’t secure,” said Neumann, who moderates a forum and writes a regular column on security issues.
“The IoT cannot possibly survive in the long run if there is no security… There’s no hope if we continue on the path we’re on of putting more and more things online that can be compromised either directly or through the network they are on,” he said, calling companies that advertise they can secure the IoT “a fantastic fraud” and “all smoke and mirrors.”
The CRASH program has developed a formal spec for a 64-bit MIPS system that uses special instructions so “if you don’t have the right credentials, you can’t get at an associated object, which might be an entire database or app,” he said. “We are using formal methods to prove the capability mechanisms cannot be bypassed or forged—given that kind of architecture, there is some hope something secure can be developed in the Internet of Things,” he added.
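In a capability architecture of this sort, every access to an object must present an unforgeable token naming the object and the rights it grants. The sketch below simulates that idea in software purely for illustration; in a CRASH-style design the check is enforced by the instruction set itself, and the class and method names here are hypothetical.

# Illustrative simulation of a capability check (not the CRASH hardware).
class Capability:
    def __init__(self, obj, permissions):
        self._obj = obj
        self._permissions = frozenset(permissions)  # e.g. {"read"} or {"read", "write"}

    def access(self, op):
        # Without a capability granting the operation, the object is unreachable.
        if op not in self._permissions:
            raise PermissionError(f"capability does not grant '{op}'")
        return self._obj

# Code holding only a read capability can read the database but never write it.
database = {"record": 42}
read_only = Capability(database, {"read"})
print(read_only.access("read")["record"])  # allowed
# read_only.access("write")                # would raise PermissionError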
“Our capability system is unlike anything I have seen before…I think what we have done is a real step function over everything we know,” said Neumann, who has worked on security systems since the 1970s.
But the task is complex. He quipped researchers are now well into the sixth year of the four-year DARPA project and are “headed for maybe eight.”
Even if it’s successful, it’s not bulletproof. “You still face key management issues, denial-of-service attacks and insider misuse like a Snowden attack, which is one of the worst problems of all,” he said.
The data center’s bulging bottleneck
Google’s head of networking described the Herculean tasks the Web giant faces in handling what is currently a quarter of the Internet’s traffic.
“We need a lot of help in networking. Computing is at a crossroads, and networking will play an outsized role in what computing becomes,” said Amin Vahdat.
According to Amdahl’s rule of thumb that a balanced system needs 1 Mbit/s of I/O for every MHz of computing, one of Google’s 50,000-server clusters needs 5 Pbits/s of bisection bandwidth. By contrast, the entire Internet is estimated to have an aggregate bisection bandwidth of 200 Tbits/s.
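The arithmetic behind that figure is straightforward; the per-server core count and clock rate below are illustrative assumptions used to make the rule of thumb concrete, not Google’s actual hardware numbers.

# Back-of-the-envelope check of the 5 Pbits/s figure.
servers = 50_000
cores_per_server = 50        # assumed
clock_ghz = 2.0              # assumed clock rate per core
mhz_per_server = cores_per_server * clock_ghz * 1_000   # 100,000 MHz of compute per server
mbps_per_server = mhz_per_server * 1.0                  # Amdahl: ~1 Mbit/s of I/O per MHz
cluster_pbps = servers * mbps_per_server / 1e9           # convert Mbit/s to Pbit/s
print(cluster_pbps)  # 5.0 Pbits/s, vs. roughly 0.2 Pbits/s for the entire Internet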
“This is one cluster in one [25MW] building and we have dozens of buildings around the world…so we need all our buildings to have more bandwidth than the Internet,” he said.
The problem will only grow as Internet traffic and Google’s businesses grow. Handling network upgrades without disrupting services “takes fundamental architecture work,” he added.
As an example of the kinds of challenges Google faces, Vahdat described the company’s recently announced Espresso system for handling peering traffic.
It essentially involves extending the capabilities of the Border Gateway Protocol (BGP), the basis for routing Internet traffic.
BGP does not pick the shortest path for a job as much as 20 percent of the time, Vahdat said. With Espresso, Google essentially maintains its own maps of routes and the traffic conditions on them on its servers, using application-specific signals.
This kind of real-time control loop “has been very powerful…this is how we built our storage and compute infrastructure and now we have applied it to networking,” he said.
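A simplified way to picture that loop is egress selection driven by measured application performance rather than by BGP path attributes alone. The sketch below is purely illustrative; the function, metric names and weighting are hypothetical, not Google’s Espresso implementation.

# Illustrative sketch of application-signal-driven egress selection.
def pick_egress(candidate_egresses, app_metrics):
    """candidate_egresses: peering points BGP considers valid for a destination prefix.
    app_metrics: recent per-egress measurements, e.g. {"egress-a": {"rtt_ms": 12, "loss": 0.001}}."""
    def score(egress):
        m = app_metrics.get(egress, {"rtt_ms": float("inf"), "loss": 1.0})
        return m["rtt_ms"] * (1 + 10 * m["loss"])  # penalize lossy paths heavily
    return min(candidate_egresses, key=score)

# The control loop re-runs this as fresh measurements arrive, steering traffic
# toward the egress that is actually performing best right now.
best = pick_egress(["egress-a", "egress-b"],
                   {"egress-a": {"rtt_ms": 12, "loss": 0.001},
                    "egress-b": {"rtt_ms": 9, "loss": 0.05}})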
Vahdat said he can foresee ways to continue to upgrade network bandwidth for about the next five years. After that, the horizon becomes unclear.
However, processors will hit a wall in data centers before networks do, Vahdat said. For that grand challenge, he pointed to a talk at the event by Berkeley’s David Patterson, who helped write the recently released paper on Google’s Tensor Processing Unit.
Patterson said the TPU is part of an emerging wave of special-purpose processors that will become increasingly popular as Moore’s law slows down progress in transistor performance.
— Rick Merritt, Silicon Valley Bureau Chief, EE Times