9.28.2012

Simplifying complex systems with Internet of Things


This article originally appeared in the Internet of Things-Architecture (IoT-A) community newsletter and is reprinted with permission.


Within the next couple of years, 2 billion people are expected to be connected to the Internet. At the same time, the instrumentation and interconnection of the world's human-made and natural systems, from cars to water to even cows, is exploding, which could mean that there will soon be more things connected to the Internet than people.

This Internet of Things promises to give people a much better understanding of how complex systems work, so that those systems can be tinkered with and made to work better. But to open these opportunities to everyone, significant strides need to be made in making sensors and wireless networks easy to program. Thankfully, several scientists, including Dr. Thorsten Kramp at IBM Research - Zurich, are addressing this very problem.

"Sensor networks are instrumental in creating a smarter planet, therefore it is critical to make them easy to pro-gram", comments Thorsten Kramp, co-developer of a technology called Mote Runner. IBM Mote Runner is a run-time platform and development environment for wireless sensor networks. He adds "We invented Mote Runner to enable developers to take advantage of the skills they have and apply them to programming wireless sensor networks. This should proliferate the use of sensor networks around the world."

The IBM Mote Runner SDK bundles a set of tools for developing WSN applications together with the IBM Mote Runner edge server and the IBM Mote Runner firmware for selected hardware platforms. On-mote applications can be developed in Java and C#, and run both in a simulated environment and on real hardware.

Applications can also be loaded and deleted dynamically over the air, without requiring physical access to the mote. The SDK further supports source-level debugging in Eclipse against the simulation environment, as well as the development of web applications to visualize and interact with wireless sensor networks running IBM Mote Runner.
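To give a feel for the programming model, the self-contained Java sketch below simulates the kind of event-driven application one writes for a mote: a timer fires periodically, a sensor is sampled, and the reading is handed to the radio. The SensorStub and RadioStub classes are hypothetical stand-ins invented for this illustration; the real Mote Runner SDK provides its own APIs for sensors, timers, and the radio.

```java
import java.util.Random;
import java.util.Timer;
import java.util.TimerTask;

// A self-contained simulation of the *kind* of event-driven application one
// writes for a wireless sensor mote: sample a sensor periodically and hand the
// reading to the radio. SensorStub and RadioStub are hypothetical stand-ins;
// the real IBM Mote Runner SDK provides its own APIs.
public class MoteAppSketch {

    static class SensorStub {                 // pretends to be a temperature sensor
        private final Random rng = new Random();
        int readCentiCelsius() { return 2000 + rng.nextInt(800); } // 20.00-27.99 C
    }

    static class RadioStub {                  // pretends to be the mote's radio
        void broadcast(byte[] frame) {
            System.out.printf("TX frame of %d bytes%n", frame.length);
        }
    }

    public static void main(String[] args) {
        SensorStub sensor = new SensorStub();
        RadioStub radio = new RadioStub();

        // Event-driven style: schedule a periodic callback instead of busy-waiting,
        // so a real mote could sleep between samples and save battery.
        new Timer(true).scheduleAtFixedRate(new TimerTask() {
            @Override public void run() {
                int reading = sensor.readCentiCelsius();
                radio.broadcast(new byte[] { (byte) (reading >> 8), (byte) reading });
            }
        }, 0, 1000);                          // every second, for the demo

        try { Thread.sleep(5000); } catch (InterruptedException ignored) { }
    }
}
```

On an actual mote, the same callback structure is what lets the processor sleep between samples and conserve battery.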

Thorsten has been with IBM for more than ten years. He initially worked on IBM JCOP, IBM's JavaCard solution, which the company sold to NXP in 2007. Before IBM entered this area, Java was widely considered too slow for smartcards; the IBM JCOP implementation proved that the benefits of programming smartcards in Java did not have to come at the cost of diminished application-level performance. Today the code runs on millions of VISA smart cards and electronic passports.

It was this experience in developing virtual machines for small devices with a matching tool ecosystem that sparked IBM's interest in wireless sensor networks and eventually led to IBM Mote Runner.

Aside from technical advantages such as being able to program a wireless sensor network in a high-level, object-oriented language, IBM Mote Runner's shielding of applications from the underlying hardware by means of a virtual machine decouples application developers from run-time platform providers and hardware manufacturers. In combination with high-level APIs, it encapsulates the particularities of the underlying hardware within the run-time platform; such particularities would otherwise riddle application code and create vendor lock-in.


Mote Runner makes wireless sensor networks easier to program

IBM Mote Runner thereby introduces a layer of interoperability from which, in theory, the different actors in today's IT business can all profit. This resembles a key concern of IoT-A, in the sense that interoperability, both technical and semantic, has to be handled at one particular layer instead of being spread across all layers and spilling into application code. Otherwise there won't be one Internet of Things but rather lots of Intranets of Things.

At the same time, it is crucial that intelligence be distributed within the Internet of Things. That is, local sensor networks and devices do not need to be connected to the Cloud all the time, but may operate autonomously over extended periods. In an open office, for example, it makes no sense to light up the whole office when only one person is working late.

Yet studies show that with only a single lamp illuminating the late worker's desk, people feel uncomfortable and start seeing things in dark corners, a situation that hardly helps them concentrate and focus on what they are there for in the first place. Manually configuring a pleasant ambient light setting is not a solution either, but a wireless sensor network of floor lamps based on IBM Mote Runner could do the trick. In this example, the floor lamps themselves detect their positions relative to one another and, in combination with motion-detection sensors, automatically determine a suitable ambient light setting. Another example is active escape-route signaling.

Here a wireless sensor network running IBM Mote Runner dynamically classifies which escape routes are viable in case of emergency, based on conditions such as visibility, temperature, or toxic-gas concentration. Since there may be no backend connectivity in an emergency, the system has to run and execute decisions on its own.
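One way to picture such a classification, though not necessarily how IBM implements it, is as a shortest-path problem: each corridor becomes a graph edge whose traversal cost rises with the hazard reported by nearby motes, and Dijkstra's algorithm then selects the least dangerous route to an exit. The self-contained Java sketch below, with made-up node names and weights, illustrates the idea.

```java
import java.util.*;

// Hypothetical illustration, not IBM's implementation: model corridors as graph
// edges whose traversal cost grows with the smoke/heat level reported by the
// motes, then run Dijkstra's algorithm to pick the least hazardous way out.
public class EscapeRouteSketch {

    record Step(String node, double cost) { }

    // adjacency list: node -> (neighbor -> hazard-weighted cost)
    private static final Map<String, Map<String, Double>> graph = new HashMap<>();

    static void corridor(String a, String b, double lengthMeters, double hazard01) {
        double cost = lengthMeters * (1.0 + 10.0 * hazard01);  // smoky corridors get expensive
        graph.computeIfAbsent(a, k -> new HashMap<>()).put(b, cost);
        graph.computeIfAbsent(b, k -> new HashMap<>()).put(a, cost);
    }

    static double costTo(String start, String goal) {
        Map<String, Double> best = new HashMap<>(Map.of(start, 0.0));
        PriorityQueue<Step> queue = new PriorityQueue<>(Comparator.comparingDouble(Step::cost));
        queue.add(new Step(start, 0.0));
        Set<String> settled = new HashSet<>();
        while (!queue.isEmpty()) {
            Step current = queue.poll();
            if (!settled.add(current.node())) continue;   // skip stale queue entries
            if (current.node().equals(goal)) return current.cost();
            for (Map.Entry<String, Double> edge : graph.getOrDefault(current.node(), Map.of()).entrySet()) {
                double candidate = current.cost() + edge.getValue();
                if (candidate < best.getOrDefault(edge.getKey(), Double.MAX_VALUE)) {
                    best.put(edge.getKey(), candidate);
                    queue.add(new Step(edge.getKey(), candidate));
                }
            }
        }
        return Double.POSITIVE_INFINITY;                  // no safe route found
    }

    public static void main(String[] args) {
        corridor("officeA", "hallway", 20, 0.0);
        corridor("hallway", "eastExit", 30, 0.9);          // smoke detected near the east exit
        corridor("hallway", "westExit", 50, 0.0);          // longer but clear
        System.out.printf("east route cost %.0f, west route cost %.0f -> signal west%n",
                costTo("officeA", "eastExit"), costTo("officeA", "westExit"));
    }
}
```

Because the computation needs only the local sensor readings and the building graph, it can run entirely on the motes or a local gateway, which is exactly the kind of autonomy the scenario requires.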

In general, as life happens in real time, things and situations change in real time as well. We therefore should not think of our systems as ideal; breakdowns are always possible. This, however, also has legal and liability implications. For instance, under the current legal framework it is virtually impossible for large clients like airports not to have default escape routes laid out and warning systems installed that are heavily connected.

These issues also have broader connotations for making decisions on a large scale. Holland, for example, aims to improve its water infrastructure within the next couple of years so that decisions can essentially be made autonomously on the best real-time data and scenarios, because managers and civil servants might be too slow, or too overwhelmed by the amount of data, at critical moments. So, in a way, one could say that their new role sits more in the middle of the process, assessing procedures and outcomes iteratively, much more like the design process itself.

In its Smarter Planet vision, IBM identified these as real-world issues and focused, among other things, on harmonizing the back-end processes. Selling sensor nodes is not the priority for IBM, but sensor networks most certainly are important as an interoperable backbone, as a means to an end. The IBM Mote Runner specifications are therefore open and free (and, according to IBM's download statistics, used by quite a lot of people), so that others can build functionally compatible platforms. The IoT will engender and facilitate many networks, which need to be interoperable from both a technical and an architectural point of view. That is where IoT-A comes in. As a concept of interoperability it is complementary, but it also addresses the higher ground of policy and contributes to the discussions on broader architectural issues.

In general, there are three types of sensor-network applications. First, those that people interact with without knowing it, such as active escape-route signaling. Second, those that we are somewhat aware of but cannot really edit or modify; home-care and elderly-care systems come to mind. And third, those that people can set up, program, and interact with themselves, for example a winemaker putting sensors in a vineyard to monitor soil-humidity conditions and trigger irrigation systems in response.

Each type of application requires different kinds of interaction, reliability, liability, and security. IBM has so far focused largely on the first two, but as developments in the Internet of Things enable groups of people to self-organize around all kinds of services, it is conceivable that IBM will factor in other qualities, looking for the cutoff point of interoperability and plug-and-play in that third type of application.

In the past two years, for example, local pollution, noise, and energy sensors have attracted a lot of publicity in the social-networking and blogging sphere, whereas IBM has been negotiating with key decision makers in institutions, cities, and governments. It is also there that the social and psychological issues are becoming more important. And even though the Facebook generation may seem unconcerned about the way it shares data and personal information, there is a countervailing trend toward greater privacy awareness. In many applications, such as smarter ticketing, privacy concerns have to be considered right at the beginning of the engineering and design process.

It is precisely because the back end and the real-world issues (the "front end") are coming together so fast that collaboration on infrastructure becomes so important, not only for IBM but for all kinds of IT providers. One of IBM's core businesses is data analytics and data management: making sense of large amounts of data, in a word.

That is what IBM offers, so a framework that allows interoperability across different layers obviously becomes very important. IBM therefore welcomes and facilitates open standards and open platforms. If we can agree to share functional specifications rather than compete on them, then we can differentiate and compete on non-functional features such as performance, privacy, security, and reliability. Nobody can go it alone, which makes IoT-A such an important project, as it tries to establish this kind of common understanding.

9.25.2012

IBM startup cuts a new path in nanolithography



"I hope the next Mark Zuckerberg will be from Switzerland." -- Dr. Thomas Knecht

In 2010, scientists at IBM Research - Zurich created the world's smallest three-dimensional map of the Earth (a Guinness World Record), demonstrating a new tool to fabricate structures and objects on the nanometer scale.

But IBM isn’t in the nano-tooling business, so standard procedure to bring the tool to market would require a partner – similar to what was done with the scanning tunneling microscope in the mid-80s.

The two inventors, Felix Holzner and Dr. Philip Paul, took a different approach and decided to license the technology from IBM, bringing it to market themselves under the name of SwissLitho.

Young entrepreneurs Philip Paul (left) and Felix Holzner
Felix and Philip met while working together at IBM starting in 2009, when the tool was first conceived. In 2010, two papers highlighting the work were published in Science and Advanced Materials. The work received high accolades from the scientific community, which gave them the confidence to launch SwissLitho.

The collaboration between IBM and SwissLitho will extend beyond just patent licensing. A joint development agreement will be set up within the framework of an EU research project and a CTI development project. These projects aim to advance the technology of the NanoFrazor and will extend the opportunities for IBM to use the NanoFrazor for novel research applications.

Before packing up for their new offices, Felix and Philip answered a few questions.

When is SwissLitho officially going to be launched?

Felix: We actually founded SwissLitho back in January 2012. We are slowly creating some marketing buzz with the launch of our website, by doing press interviews, and by contacting potential customers.

What does this nano-patterning tool do?

Philip: We call our tool the NanoFrazor and you can think of it as a nano-sized chisel, similar to what the ancient Egyptians used to create hieroglyphics.

Our NanoFrazor is an exciting new tool for the fabrication of nanometer-sized, 3D shaped devices and structures. Quality control and metrology can be performed immediately during or after patterning, ensuring very short turnaround times.  

The fabrication process is all-dry, direct-write, and is compatible with standard cleanroom fabrication processing.  

Scientists and nanotechnology producers can use this economical and user-friendly tool to quickly and easily fabricate and investigate the nanostructures that are increasingly needed for electronic, optical or quantum nano-devices.

What are your respective roles within the company, and how did you decide that?

Felix: Well, it was pretty clear from the beginning that I would act as CEO. I’ve been responsible for the business side from the very start.

Philip: My focus has been more on the technical development, so my business card will read Chief Science Officer. But as with any startup, we’ll be wearing many different hats.

Felix, you participated in “VentureLab” at ETH Zurich. Can you tell us something about that?

Felix: VentureLab is a special entrepreneurial workshop that provides coaching for startups. I needed the credits for my PhD, so it seemed a perfect match. It’s fairly competitive—they only take 25 out of about 150 applicants. The participants propose their own projects, and five of the 25 are then chosen as case studies. SwissLitho was one of them.

In fact, your NanoFrazor project ended up receiving the Venture and venturekick award. Congratulations.

Felix: Thank you, yes, this has really given the NanoFrazor some welcome exposure within the investor community.

What gave you the final push to go ahead with your startup?

Felix: IBM had been looking for business partners to commercialize our nanofabrication technology. Negotiations were conducted for over a year, but in the end we decided that our technology had become too valuable to hand over to another company.

Philip, what was your main contribution?

Philip: I joined the project to make the tool much faster and more ready for commercialization. I was hired to push its boundaries to see how fast it could go.

How did you come up with the name “NanoFrazor”?

Felix: First of all, the name had to be unique for branding purposes. We coined the word “Frazor” from the German word “Fräse”, which means “milling tool”, combined with “razor” to highlight the sharp tip that creates the patterns.

Who are your potential customers?

Felix: We’ll target academia to start, but eventually we also hope to attract industrial customers. The technology needs to be improved and refined before that will be feasible, however. Our goal is to have our technology used throughout academia in order to improve it and find new applications.

Philip: Our technology is a good alternative to e-beam lithography because of its small desktop-sized footprint and its significantly lower cost. So in the longer term, we believe that major suppliers of scientific instruments could be very interested in our technology. There could be some good synergies with atomic force microscopy tool manufacturers.

Is your product “Made in Switzerland”?

Philip: For the most part, yes. The components are sourced from suppliers in the US, Switzerland and Germany. Don’t forget: at the moment it’s just a prototype, still in the cottage industry phase. We have the know-how to make our own electronics locally.

What about copycat or retro-engineering manufacturing?

Philip: Our invention is protected by five patents and the design is quite clever and refined, if we do say so ourselves (laughs).

Good luck on this exciting new venture.

Felix: Many thanks. Launching a new business is always a risky undertaking, but we’re optimistic it will work.

Philip: The NanoFrazor has so much potential. We are convinced it will be a boon to nanotechnology.

9.24.2012

Nanoliter-volume “Swiss army knife” for pathology and cell biology

A novel microfluidic probe technology being developed at the IBM Research – Zurich Lab could become a hot new tool in research laboratories and diagnostics labs.

Govind Kaigala

"Having one's vision and ideas vindicated and acknowledged by long-term support is a dream come true for any scientist."
-- Govind Kaigala
The European Research Council (ERC) has announced its grant winners for 2012. ERC Starting Grants aim to support promising young scientists to establish their own research team for independent research.

Applicants may be of any nationality, but they must be based at a European university or research institution, have 2–12 years of research experience following their PhD, and have a strong scientific track record in a field showing great promise.

Competition for ERC grants is fierce: only a little more than 10 percent of the submitted research proposals win the prestigious and generous monetary award each year.

Two of this year’s winners, Govind Kaigala and  Armin Knoll, hail from IBM Research – Zurich.

We first caught up with Govind Kaigala, a research staff member who earned the award for his project, BioProbe. Later this week, we will interview Armin.

What does this award mean to you professionally and personally?

Govind: Professionally, this grant allows me to consolidate and expand my research activities while giving me the confidence to take on longer-term research topics that I might otherwise have hesitated to tackle. The success of the microfluidic probe (MFP) as a tool used in research and diagnostics would be a significant step forward; this could be akin to laser eye surgery, a technology also developed at IBM.

Personally, it provides me the opportunity to be creative!

"BioProbe": Local processing of tissues/cells
What is BioProbe?

The BioProbe project will take advantage of our ongoing research activities on the microfluidic probe (MFP). The MFP uses nanoliter volumes of liquid to perform local biochemistries on cells, tissues and other biological surfaces at the micrometer scale.

One theme being actively pursued within BioProbe is the staining of tissue sections with multiple molecular markers to obtain more, and higher-quality, information for the precise diagnosis of cancer.

The objective of the BioProbe project is twofold: First, to mature the MFP technology to perform unique chemistry and physics at biological interfaces and, second, to apply this technology to address a few pertinent problems in pathology and cell biology through interactions with colleagues in the life sciences.

You competed in the life-sciences track for this award. Isn’t that a bit surprising, coming from IBM Research?

Yes, at first glance it may not seem obvious. My objectives and goals for the BioProbe are to develop next-generation tools for use in cell biology and tissue analysis. This requires ideas and techniques inherent to the physical sciences, in particular micro- and nanotechnologies, and that's why the IBM Research – Zurich Lab is a wonderful fit.

I’ve found that being at arm’s length from a problem gives you the necessary perspective that quite often helps bring much better resolution.

What outcomes can we expect?

I envision that the MFP will be available in the next several years for use in research laboratories and diagnostics facilities to assist in research activities. To this end, we are working with the University Hospital in Zurich, ETH Zurich and partners in the pharmaceutical industry. In addition, this grant will help train personnel who will be adept at working in multidisciplinary research environments while being exposed to both basic and applied biomedical research.

BioProbe is just the beginning; we have so many more ideas in mind. My vision is that the MFP will become a facilitator for investigating previously unapproachable problems in cell biology and pathology by providing multifunctional capabilities. A lot like a Swiss army knife!

Monitoring power grids smartly in real time

The PhasorNet project will help energy companies process real-time streaming data with an eye to preventing major blackouts.

One of humankind's most ardent wishes is to anticipate disaster and avert it. While many of our efforts to foresee future problems and correct them fail for lack of timely information, at least one system -- the power grid -- lends itself to a technology solution that can identify disturbances before they become widespread.

An Open Collaborative Research project at IBM Research called PhasorNet has set out to help energy grid operators take corrective measures ahead of time to prevent blackouts and other system disturbances.

PhasorNet, a real-time monitoring system that uses an IBM stream computing analytics framework to assess data from grid sensors called Phasor Measurement Units (PMUs), began as a research collaboration between a team at IBM Research - India, led by Deva P. Seetharam, and IIT Madras and IIT Kharagpur, two of India's foremost technical institutes.

With India coming to grips with an energy system whose grid all but collapsed in August 2012 -- leaving some 700 million people without electrical power for several days -- PhasorNet has the potential to help bring greater predictability and reliability to the country's energy infrastructure.

India's electrical grid is struggling to supply the country's transportation, healthcare and manufacturing infrastructure with enough power to meet an annual growth target of 9 percent. But by year's end, India's Gross Domestic Product (GDP) growth is expected to reach only 5.5 percent.

Writing about India's "grinding energy shortage," The Wall Street Journal predicted two months before the system-wide blackout that India's energy "insecurity" would be the "largest constraint on the economy, one of alarming proportions."

India's grid gets a safety (Phasor)Net

To help address what The Economist called "the wider crisis in India's power sector," IBM's Seetharam and his team use PhasorNet's PMU sensors to continuously monitor the electrical waves on an electricity grid, and then share the resulting measurements with utility companies and regulatory bodies.

The collaboration with IIT Kharagpur is responsible for the real-time collection of data. The collaboration with IIT Madras focuses on a communications system that delivers data from many PMUs to a local data concentrator, ultimately feeding data into various analytical applications.

As the PhasorNet team wrote in Stream Computing-Based Synchrophasor Application for Power Grids, the emerging stream computing paradigm is "not only capable of dealing with [a] high volume of streaming data, but [it] also enables the extraction of new insights from data in real time." To this end, PhasorNet also ensures that applications are reconfigurable and scalable -- key requirements for a highly responsive grid system.
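As a rough, self-contained illustration of what such a streaming analytic looks like (not the actual PhasorNet application), the Java sketch below pushes PMU frequency samples through a one-second sliding window and flags a disturbance when the windowed mean drifts outside an operating band around the 50 Hz nominal frequency of the Indian grid. The window length and alarm threshold are illustrative assumptions.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch, not the PhasorNet code: process a stream of PMU frequency
// samples through a sliding window and raise an alarm when the windowed mean
// drifts outside an assumed operating band -- the kind of lightweight per-stream
// analytic a synchrophasor application runs in real time.
public class FrequencyMonitorSketch {

    private static final double NOMINAL_HZ = 50.0;    // Indian grid nominal frequency
    private static final double BAND_HZ = 0.5;        // assumed alarm threshold
    private static final int WINDOW = 25;              // ~1 second at 25 samples/s

    private final Deque<Double> window = new ArrayDeque<>();
    private double sum = 0.0;

    /** Feed one sample; returns true if the windowed mean is out of band. */
    public boolean onSample(double frequencyHz) {
        window.addLast(frequencyHz);
        sum += frequencyHz;
        if (window.size() > WINDOW) sum -= window.removeFirst();
        double mean = sum / window.size();
        return Math.abs(mean - NOMINAL_HZ) > BAND_HZ;
    }

    public static void main(String[] args) {
        FrequencyMonitorSketch monitor = new FrequencyMonitorSketch();
        // Healthy samples first, then a simulated generation/load imbalance.
        for (int i = 0; i < 50; i++) monitor.onSample(50.0 + 0.02 * Math.sin(i));
        for (int i = 0; i < 50; i++) {
            if (monitor.onSample(49.2)) {              // sagging frequency
                System.out.println("disturbance flagged at sample " + i);
                break;
            }
        }
    }
}
```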

Building a team of stream computing researchers

“Our goal from the start with this new PhasorNet technology was to create an open research component so that the larger grid community would benefit,” Seetharam said. “A professor from IIT Madras spent time with us in Bangalore to understand the issues involved with networking, and then I visited IIT Kharagpur to focus on PMUs and data collection – where we have worked with interns from both institutes.”

In fact, Kaushik Das, an expert in power systems at IBM Research - India, started as an intern and stayed on to continue working on PhasorNet. Post internship, Kaushik and other new IBMers have extended their research into other related areas:
  • They have focused on where to place the relatively expensive PMUs -- at transmission/distribution substations -- to keep costs down.
  • They have performed Transient Stability Analysis tests to determine the source of severe system disturbances, such as the grid splitting into several sub-systems that cannot initiate corrective or preventive controls.
  • They have instituted estimation tests that contribute to the core of real-time synchrophasor applications.
Simulating a connected environment

Yet one more issue remains to be addressed by this or another OCR project: Setting up a network test bed between IIT Madras and IIT Kharagpur, and the IBM Research labs in Delhi and Bangalore.

“We would have liked to know how to connect these four stations with a real grid,” Seetharam said. “We wanted to study the data latency – how much time it takes for a packet of data to get from one designated point to another. But because of IBM India’s network policies, we could not receive data from external parties into the IBM system. The IITs had issues as well about opening firewall ports to receive data from us.”

Even so, researchers worked with IIT Madras to create a simulated environment that could delay data packets and introduce other network delays. In short, the simulator behaved as if PhasorNet were connected over a real network experiencing real network issues.

Although no real-time computational framework that scales well and runs numerous parallel applications yet exists for power grids, Seetharam points out that comparable systems already exist in the financial services industry.

“We know it’s possible for a vast, distributed system to make rapid-fire decisions based on large volumes of ever-incoming data,” Seetharam says. “We’re going to create an energy monitoring system that responds just as quickly and as sensitively as a financial trading system.”

9.20.2012

From Personal to PERSONALized Medicine


by Moshe Rappoport, Technology Advocate and Trend Expert, IBM Forum Zurich Research - Industry Solutions Lab

Personalized medicine has become a buzzword in the use of technology to transform medicine and patient care. Yet, healthcare is a people-based issue. It’s the intuition, the experience, the human sense and sensitivity that allow doctors and caregivers to excel in their profession. So the challenge is this: What can we do to smooth the transition to a medical world that is increasingly enhanced by, and dependent on, technology?

As someone who has regularly briefed CEOs from the healthcare and life sciences industries at the IBM Forum Zurich Research ISL in Switzerland, I have had the opportunity to see which technology projects tend to be most successful.  My answer to the challenge may surprise you, coming from someone who has been working at a world-famous science and technology lab for more than a quarter of a century.

IBM Technology Advocate Moshe Rappoport
(photo Mike Ranz)

Before I give you my impressions, let me share with you some of the major tipping points in healthcare informatics. Three key technological components are needed to create game-changing medtech solutions that will support our dream of evidence-based medical diagnostics and outcomes. 

Firstly, we need affordable and available technology for capturing patient data and novel diagnostics in real time.

Secondly, we require the ability to share health data securely through local and remotely interconnected communications devices (often called the Internet of Things).

Thirdly, we must have intelligent systems which are capable of combining and comparing patient data with large amounts of clinical data and, based thereon, proposing optimal treatments.  


I believe that today we can claim to be reaching the tipping point on all of these requirements. For example, at the University of Ontario Institute of Technology, neonatal intensive care specialists can now monitor a constant stream of biomedical data, such as heart rate and respiration, enabling them to spot potentially fatal infections in premature infants up to 24 hours earlier than before. Through deep analytics and a better understanding of population health, it will become increasingly possible to hyper-personalize medicine. For example, if a physician is treating a 45-year-old Japanese woman with high blood pressure, a history of smoking and breast cancer, they will increasingly be able to gather evidence-based information on specifically which treatment would work best for her. Analytics, including novel capabilities based on IBM's Watson technology, will help us look more closely at subpopulations that differ in their susceptibility to a particular disease or their response to a specific treatment.

All of these examples are first-of-a-kind efforts, and I expect to see significant advancements in the next few years—a golden era for medical technology. The timing couldn’t be better, as we are facing demographic and cost explosions that require radical new approaches to healthcare.


"I believe that I can state from my more than 40 years
of IT experience that the success of technology adoption
is usually correlated with the amount of effort spent in
designing systems optimized for human beings."
(photo by Mike Ranz)

So back to my original question about smoothing the transition to a technology-enhanced and dependent medical world.

I am convinced that we must plan from the very beginning—and not as an afterthought—to deal with a realistic personal view of the various people who will be using these systems. And we must continue to do so at every point during this transition phase. Our view must encompass all stakeholders: patients and their families, caregivers at all levels, administrators, government officials, payers etc. In other words, as we become ever more dependent on medical technology, we must not risk losing the human touch that is so important to the healing process. For me this involves fostering a feeling of trust on all sides.
 


Some of the factors we will need to consider are the usability of medtech systems, including easily understandable results, as well as the transparency of complex processes. We will also need to be sensitive to the tech-readiness of different age groups, to the rights of patients to be informed about their health in a sensitive way, and of course to data privacy.

Already we are taking steps in these directions. Swiss start-up Nhumi, for example, is revolutionizing the way physicians interact with electronic patient data by providing the most intuitive interface you can think of: an interactive, browsable 3D map of the human body. It lets doctors easily look up and access their patient's electronic health record, including medical notes, patient history, CT scans, X-ray images, and so on. With so-called Smart Rooms, patients at the University of Pittsburgh Medical Center are able to electronically follow their planned treatment protocol from their beds, if medically and psychologically appropriate. Patient empowerment has been shown to improve medical outcomes. I personally recall watching a seriously ill physician, lying in the same room as my mother in a New York hospital, fighting her frustration at not being informed of her condition and the next treatment steps.

To further advance medtech solutions, we also have to gain a better understanding of the critical characteristics that inform patients' choices, actions and responses to their own health requirements. By truly understanding the individual patient, physicians and other caregivers can genuinely influence that patient's participation in his or her own health management.

I believe that I can state from my more than 40 years of IT experience that the success of technology adoption is usually correlated with the amount of effort spent in designing systems optimized for human beings.

As we move into an era of medtech-supported personalized medicine, we must keep our focus on the word personal at every stage.

9.18.2012

IBM’s Power 775 wins recent HPC Challenge

Starting out as a government project 10 years ago, IBM Research’s high performance computing project, PERCS (pronounced “perks”), led to one of the world’s most powerful supercomputers, the Power 775. This July, the Power 775 continued to prove its power by earning the top spot in a series of benchmark components of the HPC Challenge suite.

IBM Research scientist Ram Rajamony, who was the chief performance architect for the Power 775, talks about how the system beat this HPC Challenge.

How did PERCS become the Power 775?

Ram Rajamony: In 2002, DARPA (U.S. Defense Advanced Research Projects Agency) put out a call for the creation of commercially viable high performance computing systems that would also be highly productive.

Our response was named PERCS – Productive Easy-to-use Reliable Computing System. From the start, our goal was to combine ease-of-use and significantly higher efficiencies, compared to the state-of-the-art at the time (Japan’s Earth Simulator was the top-ranked supercomputer that year with a peak speed of 41 TFLOPS).

After four years of research, the third phase of the DARPA project – which started in 2006 – resulted in today's IBM Power 775.

How did PERCS make the Power 775 unique?

RR: It’s all in the software and hardware magic we put into the system!

PERCS chip design

PERCS blazed the trail for a whole set of new technologies in the industry. We produced the first 8-core, 4-way-Simultaneous Multi-Threaded processor – the POWER7 chip.

The compute workhorse is the 3.84 GHz POWER7 processor. We house four of these in a ceramic substrate to create a compute monster that has a peak performance of 982 GFLOPS; a peak memory bandwidth of 512 GB/s; and a peak bi-directional interconnect bandwidth of 192 GB/s. These advances resulted directly from the PERCS program.
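For readers who like to check the arithmetic: the quoted peak follows from the clock rate and core count above, assuming the commonly cited eight double-precision floating-point operations per POWER7 core per cycle (four fused multiply-adds). A minimal Java check:

```java
// Back-of-the-envelope check of the quoted peak performance, assuming the
// commonly cited 8 double-precision FLOPs per POWER7 core per cycle.
public class PeakFlops {
    public static void main(String[] args) {
        double clockHz = 3.84e9;      // POWER7 clock quoted in the article
        int flopsPerCycle = 8;        // assumption: 4 fused multiply-adds = 8 FLOPs
        int coresPerChip = 8;
        int chipsPerModule = 4;       // four POWER7 chips in the ceramic substrate
        double peak = clockHz * flopsPerCycle * coresPerChip * chipsPerModule;
        // prints ~983 GFLOPS, essentially the quoted 982 GFLOPS figure
        System.out.printf("peak = %.0f GFLOPS%n", peak / 1e9);
    }
}
```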

Then we coupled each set of four POWER7 chips with an interconnect Hub chip, codenamed Torrent, which in turn connects to other Hub chips through 47 copper and optical links and moves data over these links at more than 8 Tbps. (No typo here. That is indeed eight terabits per second!)

Cool features abound, but one in particular is how the Hub chip can translate program variable addresses in incoming packets into physical memory addresses. When used in conjunction with a special arithmetic logic unit in the POWER7 memory controllers, we get amazingly fast atomic operations.

But it’s not just about the hardware. Through PERCS we added numerous innovations in areas such as the operating system, compilers, systems management tools, programmer aids, and debuggers. We even have a new language called X10 that developers can use.

What is the HPC Challenge, compared to the Top500, Graph500, and others?

Fast Fourier Transform

The FFT is an algorithm for computing the Fourier transform of a signal, transforming it from one domain, such as the time domain, to another, such as the frequency domain. FFTs are the backbone of signal processing and are used in a wide variety of areas, such as music, medicine and astronomy.
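To make the domain change concrete, here is a short, self-contained Java sketch that computes a plain discrete Fourier transform of a toy signal. Production codes use the O(n log n) Cooley-Tukey FFT rather than this O(n²) loop, but the naive version shows the time-to-frequency transformation plainly.

```java
// Naive discrete Fourier transform of a short signal, to illustrate the
// time-domain -> frequency-domain change an FFT performs (real FFT libraries
// use the O(n log n) Cooley-Tukey algorithm rather than this O(n^2) loop).
public class DftSketch {
    public static void main(String[] args) {
        int n = 64;
        double[] signal = new double[n];
        for (int t = 0; t < n; t++) {
            // time-domain input: a 5-cycle sine plus a weaker 12-cycle sine
            signal[t] = Math.sin(2 * Math.PI * 5 * t / n)
                      + 0.5 * Math.sin(2 * Math.PI * 12 * t / n);
        }
        for (int k = 0; k < n / 2; k++) {           // frequency-domain output
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = -2 * Math.PI * k * t / n;
                re += signal[t] * Math.cos(angle);
                im += signal[t] * Math.sin(angle);
            }
            double amplitude = Math.hypot(re, im) / (n / 2.0);
            if (amplitude > 0.1) {                  // peaks appear at bins 5 and 12
                System.out.printf("bin %d: amplitude %.2f%n", k, amplitude);
            }
        }
    }
}
```

Running it prints peaks at bins 5 and 12, the two sine components hidden in the time-domain samples.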

RR: The HPC Challenge suite was constructed to stress different parts of a system such as compute, memory bandwidth, and communication capability. It also contains components such as the FFT, which is difficult to make work at high efficiencies on computing systems – but which is often indicative of how entire classes of workloads will perform.

The HPC Challenge gives you a nice fingerprint of your system’s performance across numerous dimensions that show how a system may perform on a real-world workload.

For comparison, the Top500 rankings order systems based on their FLOP rate when computing the Linpack Benchmark. These rankings are biased towards indicating only a system’s compute capability. The newer Graph500 benchmark measures how fast you can traverse a graph and compute metrics similar to the Bacon number over a social network.
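The "Bacon number" metric mentioned here is simply the hop count returned by a breadth-first search; a minimal Java sketch over a toy graph:

```java
import java.util.*;

// Minimal sketch of the kind of traversal Graph500 measures: a breadth-first
// search that returns the hop count ("Bacon number") from a source vertex to
// every reachable vertex of an undirected graph.
public class BfsSketch {
    public static Map<String, Integer> hops(Map<String, List<String>> graph, String source) {
        Map<String, Integer> distance = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        distance.put(source, 0);
        queue.add(source);
        while (!queue.isEmpty()) {
            String u = queue.poll();
            for (String v : graph.getOrDefault(u, List.of())) {
                if (!distance.containsKey(v)) {      // first visit = shortest hop count
                    distance.put(v, distance.get(u) + 1);
                    queue.add(v);
                }
            }
        }
        return distance;
    }

    public static void main(String[] args) {
        Map<String, List<String>> graph = Map.of(
            "Kevin Bacon", List.of("Actor A"),
            "Actor A", List.of("Kevin Bacon", "Actor B"),
            "Actor B", List.of("Actor A"));
        // Actor B ends up two hops away from Kevin Bacon.
        System.out.println(hops(graph, "Kevin Bacon"));
    }
}
```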

The Power 775 is notable for its “GUPS” and “MFLOPS” in the HPC Challenge. What do these measure? How are they different?

RR: Giga-Updates per Second (GUPS) and MegaFlops (MFLOPS) are as different as apples and oranges. (Actually, I should rephrase that because recent research has shown how apples and oranges are indeed very much alike, calling into question the validity of that analogy.)

MFLOPS measure the compute characteristic of a system – the number of floating-point operations, in millions, that can be executed every second. Systems have a peak FLOP rating as well as FLOP ratings when executing various workloads, such as the Top500's Linpack.

GUPS measure the rate at which the system can perform random updates to a large set of values that are distributed across the memory in the system. The idea is to find out how well a system can handle a workload that requires extremely fine-grained communication with no locality. The lack of locality in this context refers to the fact that contiguous operations in time are directed at values stored in very different places. The GUPS workload has traditionally been brutal on systems, but is representative of workloads that just don't have the locality characteristics that machines are optimized to handle well.
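The GUPS kernel itself is tiny. The official HPCC RandomAccess benchmark prescribes a particular pseudo-random update sequence and spreads both the table and the updates across the whole machine, but the simplified, single-threaded Java sketch below is enough to show why the access pattern has essentially no locality.

```java
import java.util.concurrent.ThreadLocalRandom;

// Simplified, single-threaded version of the GUPS random-update kernel.
// The official HPCC RandomAccess benchmark uses a prescribed pseudo-random
// sequence and distributes the table and updates across the whole machine;
// this sketch only shows the locality-free access pattern being measured.
public class GupsSketch {
    public static void main(String[] args) {
        int logSize = 22;                      // 2^22 longs = 32 MB table
        long[] table = new long[1 << logSize];
        long mask = table.length - 1;
        long updates = 4L * table.length;      // HPCC performs 4x table-size updates

        long start = System.nanoTime();
        ThreadLocalRandom rng = ThreadLocalRandom.current();
        for (long i = 0; i < updates; i++) {
            long r = rng.nextLong();
            table[(int) (r & mask)] ^= r;      // read-modify-write at a random address
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("%.4f GUPS%n", updates / seconds / 1e9);
    }
}
```

Every update touches a random word in a table far larger than any cache, so throughput is limited by the memory system and interconnect rather than by the floating-point units.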

What did your team do that put the Power 775 in the #1 position on the HPC Challenge?

RR: Well, we began many years ago with the goal of disrupting the status quo for interconnect-intensive workloads. Many of our performance metrics show linear scaling as the system size increases. In other words, for workloads like GUPS, PTRANS (a measure of the interconnect bisection bandwidth), and FFT (a workload that stresses all three elements: compute, memory and interconnect), system performance increases linearly with the addition of hardware.

Our performance on benchmarks like GUPS, PTRANS, and the FFT is truly disruptive. We are 13 times better than the current leader on GUPS, and over 3.3 and 2.7 times better on PTRANS and FFT, respectively.


This is unheard of in a typical system. In that sense, the Power 775 has been extremely disruptive as evidenced by the large margins by which we have taken over the number one position in the HPC Challenge results.

What does being #1 on this list mean for the Power 775’s capabilities?

RR: People have always grappled with how to structure large-scale computing systems. If you look at HPC systems in existence today, there is a spectrum of solutions with different compute and interconnect characteristics. Each of these solutions works well for the particular problem that it is used to solve.

The advantage of the Power 775 is that it is a general-purpose system. It has a completely homogeneous compute component, which leads to a simple mental model of how the system works. The communication prowess of the system is forgiving of how programmers write their programs, making it easy to write high-performance programs for the Power 775.

And while the system is suited for general purpose high-performance computing, it shines especially well on workloads that need more interconnect performance and capabilities.

9.11.2012

Learning from sand castles to build future chips

In the United States, data centers already consume two percent of the available electricity, and their consumption is doubling every five years. In theory, at this rate, a supercomputer in the year 2050 would require the entire output of the United States' energy grid.

To address this challenge, IBM scientists are researching vertically stacked chips, also known as 3D chip stacks. Right now they are developing innovative manufacturing solutions using a natural phenomenon that children around the world appreciate every summer while building sand sculptures—capillary bridging in wet sand.


Another day at the beach

Water trapped between sand grains forms so-called capillary bridges, which give wet sand the mechanical strength needed to create giant works of art. In electronic packages, capillary bridges play a slightly different role: they support the formation of novel electrical or thermal interfaces through the self-assembly of nanoparticles.

3D chip stack design by IBM

A 3D chip stack consists of a stack of integrated circuit chips with vertical electrical interconnects. Between each pair of chips, hundreds of thousands of periodically arranged connections (separated by less than the width of a human hair) are needed to provide communication between the layers. Robust manufacturing processes are key to creating good electrical connections without defects.

IBM scientist Dr. Thomas Brunschwiler has lofty goals for this concept—to create supercomputers that will someday be the size of sugar cubes. But to get there, several challenges need to be resolved.

Before heading to the iMAPS conference in San Diego to present his research on novel interconnects by capillary bridging for the first time, he answered a few questions:

IBM scientist Thomas Brunschwiler
Q. What is the main challenge of developing 3D chip stacks?

Thomas Brunschwiler: One main challenge for 3D chip stacks is to keep the transistors at temperatures below 80 degrees Celsius, given that multiple chips dissipate heat to a shared heat sink at the back of the stack. Hence, an under-fill material with low thermal resistance is required in the space between the chips that is formed by the electrical connections. Improvements to traditional capillary under-fills have yielded only moderate thermal performance.

Q. How are you addressing this challenge?

TB: Our concept is to use the self-assembly of nanoparticles by capillary bridging of liquid between individual micron-sized features between the chips. As the liquid evaporates, the nanoparticles suspended in it are deposited to form so-called "necks" between these micron-sized features. The "necks" could be electrical interconnects between copper pillars, or thermally conductive paths between particle beds filled in between the chips by centrifugation. Depending on the choice of nanoparticle material, electrical or thermal "necks" result.

Q: The concept of using necks based on sand castles is fascinatingly simple. How did you come across it? 

TB: Our initial investigations with traditional methods were not satisfactory and we ran into several roadblocks. So, we began to think out of the box. We met with our nano-assembly colleagues, including Dr. Heiko Wolf, and he suggested capillary bridging.

We conducted several tests, and the concept worked very well right from the start. From that first principle, several new ideas were subsequently born.

Nanoparticles were assembled by capillary bridging between the micron-sized spheres, forming so-called "necks"


Q. What type of reaction do you expect at the iMAPS conference?

TB: I can imagine that engineers will be fascinated by the simplicity and the performance of the proposed solution. At the same time, they will also be skeptical about whether a robust process for high-volume manufacturing is possible. Up to now, we have been performing tests on lab-scale samples to prove the concept. Many more investigations will be needed to yield a high-end manufacturing process.

On the other hand, the concept could be applicable in other fields, such as front electrodes for solar cells. But as with any science, we need to start small and learn before we can ultimately provide a solution.

Consider that in 2006, we had a simple lab concept for a hot water-cooled supercomputer. And in 2012, the fastest supercomputer in Europe, SuperMUC, is based on that design. Progress takes time.

Q. So with all these challenges ahead, what is next?

TB: We will soon start a European-wide collaboration with high-tech industries, research institutes and universities to investigate this technique further. So we need to connect with our partners and begin to scale up the technology.

Good Luck in San Diego.

TB: Thanks, I’ll let you know if our concept resonates with the audience.

9.05.2012


Great Minds: student interns in Haifa and Zurich

In 2012, IBM Research Labs in Haifa and Zurich hosted nine students from seven countries. Before packing their bags and heading home, a few of them left comments and tips for future Great Minds applicants.

Adela-Diana Almasi

Adela-Diana had set the bar very high for the renowned IBM Research Lab. She comments, "I am very excited to be here. I came to Zurich with very high expectations and they have all been met. I have the chance to work alongside highly skilled and passionate researchers who study a very large variety of topics. One thing that I particularly appreciate is the fact that here, as interns, we have the freedom to make design decisions on the project that we are part of. Our opinions and input are valued."

Adela-Diana Almasi
PhD student
Computer Science
Polytechnic University
Bucharest, Romania
The budding computer scientist was accepted to the Great Minds program after submitting a paper on creating more adaptive distributed systems through machine learning techniques. "I am currently developing a learning algorithm for a neural network on a graphics card. Because it's quite close to my interests in machine learning and distributed/parallel computing, it has offered me the chance to gain more practical experience in this field. The manager with whom I had my phone interview took a direct interest in finding the most suitable project that matched my experience and interests. This says a lot about the way IBM values its employees."
Asked for her tips for future applicants, Adela-Diana said that confidence is critical. "I've often seen many talented people who are too self-critical and lack confidence in their own abilities. You shouldn't let that get in the way. IBM Research has a strong reputation for finding talented people, so if you are smart and passionate, you have a good chance of being accepted."

Janos Csorba

For Janos Csorba, the 2012 Great Minds internship at IBM Research - Haifa has surpassed his most optimistic expectations.
"I can't believe how much I'm enjoying this position," the Györ, Hungary, native explained. "I didn't think it would be this great."

Janos Csorba
Master's student
Computer Science
Budapest University of
Technology and Economics
Hungary
A graduate of the Budapest University of Technology and Economics with a B.Sc. and an M.Sc. in Computer Science, Janos applied to the IBM internship on the recommendation of his university advisor.
"I just finished my Master's degree a few months ago, so the position here couldn't have come at a better time. It has been a wonderful opportunity to see a great company and experience a different culture."
In Haifa, Janos has been working on truck design configuration as part of the Lab's Constraint Satisfaction group. He's really enjoying the work on the project, which has given him the opportunity to interact directly with IBM clients.
"I joined an active project, with real-time communication with clients and problems that need to be solved on-the-fly," he noted. "It's a really good feeling to see how changes you make to the code of a project create real impact on a client's application."
He's also having a great time traveling in Israel, which he's managed to do on weekends.
"This is a really interesting country, with so many cultures living so close together. It's totally different seeing things here on your own. Since I've arrived, I've felt extremely relaxed the entire time. I'm really enjoying being here."

Daniela Dorneanu

The IBM Lab in Zurich made an instant impression on Daniela from her very first day. "IBM Research - Zurich is a great place where you have the chance to work with first-class scientists who invest most of their energy in projects they are passionate about. At the end of the day, what can be better than working in the field that you enjoy most, in a top company? At IBM Research - Zurich I feel very lucky to have the chance to gain experience in exactly my field of choice and to work towards innovative results."

Daniela Dorneanu
Master's student
Computer Science
Polytechnic University
Bucharest, Romania
Daniela also had the chance to work on her special interest. "I am following a double-degree Master's program at the Polytechnic Universities of Milan and Bucharest. My courses are focused on security and distributed systems, but I have a hidden passion for Linux and the open-source world. Over the past several months at IBM, I have been analyzing security metrics for the Linux kernel, which is the topic of my Master's thesis."
Daniela is one of three female students from Romania. However, this is not representative of the current reality in her field. She comments, "There are very few women in computer science and it was nice to come to IBM to see more women involved in research. I think our main assets for working in this field are good intuition, determination and calmness."
Before packing up, Daniela had some parting advice for future Great Minds students. "To stand out when you apply, I think you should find a topic that IBM is interested in and something that you enjoy. If you do it right, it will show in your application."
Read three more interviews here »