10.31.2012

Reading a mind in pain


Editor’s note: This article is by Dr. Guillermo Cecchi of IBM Research’s Computational Biology Group. 

Pain, whether in a healthy or a sick person, is an enormous part of medicine. Yet it is poorly understood. Consider this: we are often asked to rate our perception of pain intensity on a scale of 1 to 10. In this way, pain seems measurable. However, pain is a highly subjective phenomenon, heavily shaped by perceptual and cognitive mechanisms, so individuals’ pain perception levels vary widely.

Chronic pain affects at least 10 percent of the population

[source: Harstall C and Ospina M (June 2003). "How Prevalent Is Chronic Pain?". Pain Clinical Updates, International Association for the Study of Pain XI (2): 1–4.]


But there are patterns. I have been working with my colleague Irina Rish and Northwestern University’s Dr. Apkar Apkarian at his Pain and Emotion Lab for several years to find those patterns in functional Magnetic Resonance Imaging (fMRI) scans. And this month, our paper Predictive Dynamics of Human Pain Perception detailed findings based on experiments carried out to understand the emergent properties of the functional brain networks shown in these scans.

We demonstrated that subjective responses to pain can be captured by a single model, informed by fMRI, in which individual differences are determined by a handful of parameters. A physician could then use this knowledge to personalize patient diagnosis and treatment.

Measuring pain

To be clear, none of the study’s 26 participants were harmed. We measured brain activity based on stimuli delivered via a thermal plate at different points on the skin. They were told that the plate would change temperature, but not when or to what temperature.

We describe this in more detail in the paper, but the mind displays three features when in pain:

  • “First, the pain must signal the threat of tissue damage. This is determined by the current value of the skin temperature. The signal magnitude must consistently increase with the temperature, although not necessarily linearly (as in fact, tissue damage is not linear with temperature).
  • “Secondly, this signal magnitude [registered in the brain] must anticipate the possibility of damage – sounding the alarm of an imminent threat, given the recent history of temperature values, independently of the current temperature. This information can also be partially captured by the skin temperature’s rate of change.
  • “Finally, given its powerful hold on behavior, the intensity of pain perception must quickly decay once the threat of damage disappears, so as not to interfere with [a person’s] other ongoing mental states.”
The raw value of the temperature is just one of the drivers of perception. Our work also showed that there are other components, such as the need to anticipate potential damage, even when the current temperature is comfortable, and the need to forget quickly, even when the current temperature is high, but falling.
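
To make these ingredients concrete, here is a minimal toy sketch in Python of a pain-like signal driven by a thermal stimulus. It only illustrates the three features quoted above and is not the model published in the paper; every function name, parameter and value in it is an assumption made for the example.

```python
# Toy illustration (not the published model): a pain-like signal with a
# nonlinear drive from the current temperature, an anticipation term tied to
# the temperature's rate of change, and a fast decay once the threat recedes.
import numpy as np

def toy_pain_response(temp, dt=0.1, threshold=45.0,
                      gain=1.0, anticipation=2.0, decay=0.8):
    """Return a pain-like signal for a skin-temperature time series (deg C)."""
    pain = np.zeros_like(temp)
    for i in range(1, len(temp)):
        # Nonlinear drive: grows faster than linearly above a damage threshold.
        drive = gain * max(temp[i] - threshold, 0.0) ** 1.5
        # Anticipation: a rising temperature adds to the signal on its own,
        # even while the absolute temperature is still comfortable.
        rate = (temp[i] - temp[i - 1]) / dt
        drive += anticipation * max(rate, 0.0)
        # Fast relaxation toward the current drive, so the signal decays
        # quickly once the threat of damage disappears.
        pain[i] = pain[i - 1] + dt * decay * (drive - pain[i - 1])
    return pain

# Example: a temperature ramp that peaks near 48 deg C and falls back.
t = np.arange(0, 60, 0.1)
temp = 38 + 10 * np.exp(-((t - 25) / 8.0) ** 2)
print(round(toy_pain_response(temp).max(), 2))
```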

Our work represents the first model of pain as a perception with a predictable, deterministic signature. Beyond pain, there are very few examples of similar models for other mental processes, and our work shows the power of simultaneously modeling mental states and brain activity.

And the brain activity of our participants reflected these components.

In fact, the brain areas that can be used to infer the raw value of the temperature are different from those that are used to infer pain perception. Moreover, combining the inferred pain perception from fMRI readings with a prediction based on the inferred temperature gives us the best predictive model.

Helping doctors help your pain

Because subjective responses to pain can be constrained by a handful of parameters, doctors can personalize patient diagnosis and treatment. Our “mind reading” approach of using fMRI also implies that it is possible to infer the level of pain a person is suffering even when he or she cannot report it, verbally or otherwise. We hope that it will also be the basis of a more accurate prognosis for patients at risk of developing chronic pain (for instance, those with prolonged back pain).

The next steps in this study are to find out how the model changes for patients with chronic pain, in the hope that it will reveal specific mechanisms disrupted by the condition. And we will also study pharmacological effects on the model to answer questions such as: which part of the model is affected by analgesics and anesthesia?

Other articles by Dr. Cecchi: Diagnosing psychosis with word analysis.


10.29.2012

Carbon nanotubes to keep up with Moore’s Law


Editor’s note: This article is by Dr. Hongsik Park and Dr. Wilfried Haensch. Dr. Park is a research scientist at IBM’s Thomas J. Watson Research Center. Dr. Haensch is a senior manager and the Carbon Nanotube project leader at IBM Research.  

The end of silicon microprocessors is near. Well, the end of continued performance improvements in silicon chips is near. One of silicon’s most promising successors in the Moore’s Law race, we at IBM Research think, is the carbon nanotube. And our team successfully fabricated and evaluated 10,000 carbon nanotube transistors on a single chip by precisely positioning nanotubes on designated sites on a substrate.

We expect carbon nanotubes’ promise of sub-10 nm transistors to enable continued aggressive scaling. And increasing the number of cores on a chip would also provide an even higher degree of parallel processing. Our models show that carbon nanotube chips would have about a five to 10 times improvement in performance compared to silicon circuits.

Lining up carbon nanotubes with a density higher than 10^10/cm^2 is just one of many challenges in scaling and mass-producing this technology before it can replace silicon.

Working at nanoscale

Carbon nanotubes and nanowires are smaller than an optical microscope can resolve. We use scanning electron microscopy and atomic force microscopy to “see” them. To fabricate nanoscale devices with features smaller than 100 nm, we use electron beam lithography.



For example, to fabricate 10,000 aligned carbon nanotube transistors, we first fabricate a substrate that has 10,000 sites, one for each transistor. Through a chemical placement method, we put individual carbon nanotubes on their designated sites – only then can the transistors be formed on the carbon nanotubes using conventional semiconductor fabrication facilities (shown in figure 3a in our paper in Nature Nanotechnology).

10nm reality

A carbon nanotube microchip won’t look any different to the naked eye from today’s chips. But replacing the active layers of silicon requires scaling all of the manufacturing processes down to 10 nm. This means we must improve the efficiency of putting carbon nanotubes on the substrate, or we may have to develop new chemical methods to uniformly place multiple carbon nanotubes onto each transistor site.

Once manufacturable, carbon nanotube chips will have more functionality than conventional silicon chips. We expect to see carbon nanotube logic chips first in the high-performance space, such as server chips and business transaction machines that need high single-thread performance.

Manufacturing challenges notwithstanding, we believe carbon nanotube transistors could reach a 5 nm channel length. Carbon nanotubes are also being pursued as a material for transparent electrodes in display and photovoltaic devices, and for many other applications.

10.25.2012

Setting the standard in Internet security

IBM Research scientist earns ISSA Hall of Fame recognition.

Dr. Wietse Venema, IBM Research's authority on Internet security, has been inducted into the Information Systems Security Association's Hall of Fame for lifetime achievements which include Postfix, the Coroner's Toolkit and many other information security applications. In this interview, Dr. Venema talks about what inspired some of these tools, today's security issues of the mobile Internet, and more.

Q. Your work on security goes back to 1990 with TCP Wrapper. How was web security approached at that time? Did TCP Wrapper defend networks against threats that still exist today?


Wietse Venema: To set the scene, the worldwide web had not yet been invented; the Internet connected mostly universities and large-company research labs. Firewalls were almost non-existent. Microsoft's Windows did not have Internet support until four years later. And most "computer hackers" did not work for governments - or criminals.

As my early contribution to security, TCP Wrapper implemented a burglar alarm and firewall for server applications, at a time when firewalls were still exotic things, and people had no idea what was happening on their computer networks.

Even today, many server applications, including the SSH (secure shell) server, support TCP Wrapper rules that can block unwanted connections. However, most of today's systems (both clients and servers) have a firewall built into the network protocol stack.

Q. Fast-forward to 1996: email was still a new technology to many. What particular issue inspired the Postfix mail server?

WV: UNIX was the dominant server platform, and Sendmail was the dominant mail server application. Originally developed 15 years earlier for a much friendlier network, Sendmail had a history of serious security holes that allowed hackers to take remote control over computer systems.

The rationale for work on Postfix was that a more secure infrastructure would make people more confident to use the Internet for e-business. And of course what's good for e-business was also good for IBM.

Q. Today, we're banking, shopping, and tweeting on the web -- and on the mobile web. Did this expansion of how the web is used introduce new security threats, or just new avenues for existing threats?

WV: You have this computer in your hand that is more powerful than a desktop machine from 10 years ago, that is on the Internet all the time, and that you use for electronic payments, for all kinds of personal information, and nowadays even to access sensitive data at work.

Bringing all of this information together on the same device creates new opportunities -- not only for legitimate users of those devices, but also for those with other intentions. Suffice it to say that a lot of work lies ahead of us to ensure that this great technology remains safe to use.

Q. How do you personally securely surf the web? Any tips for individuals to consider, beyond trusting off-the-shelf security software?

WV: Many (but not all) attacks take advantage of similarity. People are running the same versions of the same programs on the same operating systems and hardware platform. Many attacks target monocultures, and we know from biology how vulnerable a monoculture can be.

I rely on software diversity. I don't use the exact same web browser as many other people, and I don't use the exact same operating system as many other people. That doesn't make me 100 percent secure but it makes the attacks more expensive, and that is all that really matters. In the past, I have also used different hardware from many other people, but it has become unaffordable.

Q. We can also read about security breaches of large companies on a near-daily basis. What issues are businesses asking you and IBM Research about?

WV: IBM Security Research is currently helping companies to find out where their valuable information is stored; how that information moves around; and what can be done to protect that information.

Just like your money does not sit in a safe all the time, valuable information does not sit in a database all the time. It moves around as people handle it as part of their jobs, and may end up in environments that have insufficient protection -- whether by accident, or not.

Q. If we could start over and rebuild the web, how would you make it secure?

WV: That, unfortunately, is not just a technical problem. Our systems reflect the conflicting needs of businesses, consumers, and other parts of society for performance, cost, ease of use, security, and much more. Some of my work has shown "in the small" that a system can be secure, low in cost, easy to use, and fast, all at the same time.

To achieve security "in the large," many people would need to agree on what course to take, while at the same time avoiding the problems that monocultures can bring.

Given the conflicting needs, making many people agree is hard. More often, one company goes out and leads by example. IBM is in a good position to do such things.

Coming back to the question, I don't think that the Internet will ever be rebuilt. Instead, every four years or so we have been putting another layer of functionality on top of what already exists - social networking being the most recent one. With each new layer come new benefits and risks that we must learn to live with.

The net grows in layers just like the large cities of ancient civilizations.

10.24.2012

Institute models the utility of the future


Editor’s note: This article is by Dario Gil, IBM Research’s director of energy and natural resources.

Lightning hits telephone poles. Wind knocks down power lines. Mother Nature and power outages just seem to go hand in hand. IBM Research wants to help energy and utility companies with technology that predicts the future state of their assets – not just reacts when they need repairing or replacing.

That help is coming in the form of the Smarter Energy Research Institute. IBM, along with Canada’s Hydro-Québec, the Netherlands’ Alliander, and DTE Energy in the U.S., is researching and developing techniques that improve the balance between energy supply and demand using predictive analytics, optimization, visualization and advanced computation.

Deep Thunder

Deep Thunder is the weather forecasting element of the Institute’s array of analytics and optimization capabilities. Typical weather forecasts are on a scale that is too broad for a utility to use as a prediction of what it (and its customers) may face. Deep Thunder models the weather at the service-territory level, improving the accuracy and specificity of the forecast – including its impact.

Sensors already tell us about the current or historical state of an asset, and can indicate what changes to make based on their environment. The Institute wants to further automate new predictive capabilities of a utility’s assets by layering algorithms that connect new data, such as weather information, on top of the data already available.

Improving reliability through automated prediction

Today’s distribution networks must incorporate all different kinds of energy sources, from renewables such as wind and solar, to standard coal, gas, and nuclear energy. But electric distribution networks typically have a low level of instrumentation (a utility manager might describe the situation as being “blind but happy”). And while there is a trend to increasingly instrument these networks (smart meters being a salient example), analytics and simulation will play a key role in achieving the desired objective of increasing the visibility of the distribution grid.

Let's look at the power of micro-forecasting as an example.

Not your local weather report, IBM’s Deep Thunder forecasting technology makes accurate, hyper-local weather predictions, and helps measure the potential impact that weather may have on utilities. A utility company could use these outage predictions to assess the actions to take before a storm hits; the likely damage it may cause; and the restoration resources required.

Smart meters are great for telling the grid – and you, the consumer – about your home’s electrical load every 15 minutes. It’s even better when the smart meter can automatically adjust its load based on grid conditions – and even know to notify repair crews of weaknesses in the system, before a storm hits.

The Institute will create these models for utilities by incorporating everything about and around the network: the location of trees in relation to power lines and utility poles, how the entire distribution network is laid out, and even soil moisture (which makes a difference in gauging tree and root strength).

Smarter Energy Research Lab
Predicting the uncertainty of renewable energy

Imagine a distribution company. They own the wires that carry the power to consumers. Their responsibility is to make sure that all the producers of power can push their electricity out to customers, and everyone is billed correctly.

Now introduce renewables into the mix. One house in the neighborhood with solar panels? No problem. A few electric vehicles charging overnight? No problem.

What about 20 percent of an entire continent depending on wind, water, solar, and other renewable sources?

The European Union wants 20 percent of its member countries’ electricity to come from renewable sources by 2020. Fluctuations and instabilities will happen without an effective way to predict what will cause a change in supply or demand, and without a way to automate the actions to take based on those predictions.

Smart predictive meters and other responsive devices could also shift loads to places of need, say to a neighborhood charging hundreds of electric vehicles – and use distributed energy resources such as wind and solar.

The Institute’s projects will even sprinkle some behavioral psychology into its models to better predict usage patterns and what incentives might motivate a consumer.



The Smarter Energy Research Institute will operate as a collaborative research venture across the world. Hydro-Québec is one of the world’s largest hydroelectric power producers and the only North American electric utility operating its own research center, IREQ. Alliander is a major Dutch energy distributor specializing in renewable energy, serving three million customers in the Netherlands. DTE Energy is an investor-owned diversified energy company involved in the development and management of energy-related businesses and services across the United States.

Chip verification made easy

Editor’s note: This blog entry was authored by Laurent Fournier, Manager of Hardware Verification Technologies at IBM Research - Haifa.

When I tell people that I do pre-silicon verification for a living…well, you can imagine the yawns. Yet, without me—OK, without teams of people like me—computer functions that we take for granted or think of as simple, like making an ATM withdrawal, might not work so well.

I'll bet that when you go to the ATM and take out $20, you don't worry that $20,000 will be debited from your account. We expect that as long as we use the correct language to tell a computer what to do, it will do it: clicking on “Open” will open a file, and 2 + 2 will be 4.

But in reality, a million different factors could put these processes at risk. What if you try to open a network file at the exact same time as someone else? Or what if you have 20 other files already open? What about 200 files? The commands may not have been written to take these factors into account. In fact, developers cannot possibly consider every scenario when they develop a program. That's where verification comes in.

The tools that my team at IBM’s Research Lab in Haifa, Israel develops are meant to increase confidence that a computer works as it should. We can all grasp the importance of this when it comes to our bank accounts, but that’s just one example. Think of computers that help doctors dispense medicine or that dispatch emergency services.

Developing a processor entails two phases. In the first stage, the design phase, developers use HDL (hardware description language) to write instructions that describe how a processor should work. The next phase is the silicon phase. This is when the HDL instructions are transformed into an actual chip.

In between these phases, pre-silicon verification tools check the design before anything physical is built. The tools we develop in the lab generate tests to check that a chip functions as it should. For example, today's processors can execute several instructions simultaneously. But sometimes a mistake in executing one instruction is only revealed when a specific combination of instructions is completed at the same time. Our process can predict these scenarios before they turn into bugs.

So, how did people verify processors before they had test generators like the ones we develop?

Life was simpler then—and so were processors. Chip verification tests used to be built manually. As processors became more complex in the late 1980s, IBM built the first automatic test generator called RTPG (random test program generator) as a means to test the architecture for IBM Power processors.



As IBM went on to develop different processors for different architectures, a new tool had to be created for each one. Developers soon realized they needed a tool that could handle any architecture, so the model-based test generator (MBTG) was born.

MBTG comprises a generic engine that handles the issues common to all processors and an engine modeled after the specific architecture being tested. At this point, other companies began contacting IBM to develop tools for their processors—we even helped verify the Intel x86 architecture.

The tools we build today do what's known as dynamic generation. After each instruction is generated, a developer can gauge the exact situation of the processor and then determine the next instruction to test. They have evolved from the random checking done in older verification methods to performing what is called biasing. Biasing allows for randomness in a controlled fashion—basically, we adjust the parameters during testing to ensure that we cover all bases and find any bugs that might only turn up under certain conditions.
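
As a rough sketch of what biasing means in practice (this is an illustration, not IBM's generator), the Python snippet below draws instructions from a weighted mix so that corner-case-prone operations show up more often than they would under uniform random selection; the instruction names and weights are invented for the example.

```python
# Illustrative biased random test generation: instructions are drawn from a
# weighted mix rather than uniformly, steering tests toward scenarios that
# are more likely to expose corner-case bugs.
import random

# Hypothetical instruction mix: loads, stores and branches are stressed.
BIASED_MIX = {"load": 0.30, "store": 0.30, "branch": 0.25, "add": 0.10, "mul": 0.05}

def generate_biased_test(length, seed=None):
    """Generate a pseudo-random instruction sequence under the biased mix."""
    rng = random.Random(seed)
    ops = list(BIASED_MIX)
    weights = [BIASED_MIX[op] for op in ops]
    test = []
    for _ in range(length):
        # In a dynamic generator, the weights could also be adjusted here
        # based on the simulated processor state reached so far.
        test.append(rng.choices(ops, weights=weights, k=1)[0])
    return test

print(generate_biased_test(10, seed=42))
```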

Verification has been one of IBM’s best investments, saving several hundreds of millions of dollars in development costs over the last 20 years. Our goal is to further simplify the test-generation process by adding an automation layer on top of our verification tools. This will automate the creation of input files, significantly cutting back complex work efforts and hopefully reducing overall verification cycle time.

So, as we expect more from our technology—from banking apps to medical care—our team plans to have the tools in place to verify that they work as they should.

10.23.2012

Data-driven Healthcare Analytics: From data to insight for individualized care


Editor’s note: This article is by Dr. Shahram Ebadollahi, senior manager of Healthcare Systems and Analytics Research at IBM Research.

The growing availability of longitudinal medical records should mean that healthcare providers – from nurses and public health officials to specialists – have more insight into helping solve their patients’ problems in the here and now. The challenge is how to elegantly analyze all that data and derive insights from it that help those providers deliver better care to their patients.

My team at IBM Research developed the foundational analytics for a healthcare solution, now called IBM Patient Care Insights. These analytics can take into account all patient characteristics, such as treatments, procedures, outcomes, costs, etc. – basically everything about a set of patients that could be observed and captured over time (even the unstructured information, such as a doctor’s notes on a chart).

The data, in a sense, captures the collective memory of the care delivery system and embedded in it are insights about all the procedures and outcomes for all the patients. The analytics that can help us extract that insight promises to lead to better, more-efficient, and lower cost patient care. 

How IBM Care Insights derives insight from population data to better personalize decision making.

Medical data: analyzing, visualizing, predicting

So, how does Care Insights make sense of years of data, from multiple sources, about thousands of people? All to give healthcare providers a way to identify treatment and early intervention options.

Care Insights’ suite of tools uses innovative algorithms rooted in machine learning, data mining and information retrieval techniques to look for patient similarity and derive tailored insights about a custom course of action, delivered through easy-to-understand visuals.

The Patient Similarity analytics tool finds all patients who display clinical characteristics similar to those of the patient of interest. The resulting individualized insight includes suggestions on how to manage care delivery to the patient, but, perhaps more importantly, predicts health issues that could arise in the future (because patients with similar characteristics had experienced such health issues).
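
As a minimal sketch of the underlying idea (not the Patient Care Insights implementation), the example below represents each patient as a feature vector and retrieves the most similar patients with a nearest-neighbor search; the features and numbers are invented for illustration.

```python
# Toy patient-similarity retrieval: standardize a few clinical features and
# find the nearest neighbors of a patient of interest.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy cohort: rows are patients, columns are (age, systolic BP, HbA1c, BMI).
cohort = np.array([
    [54, 138, 6.1, 29.0],
    [61, 150, 7.4, 31.5],
    [47, 122, 5.6, 24.8],
    [58, 145, 6.9, 30.2],
])

# Standardize so that no single measurement dominates the distance.
mean, std = cohort.mean(axis=0), cohort.std(axis=0)
nn = NearestNeighbors(n_neighbors=2).fit((cohort - mean) / std)

# Patient of interest: retrieve the two most similar patients in the cohort.
patient = (np.array([[59, 148, 7.0, 30.0]]) - mean) / std
distances, indices = nn.kneighbors(patient)
print(indices[0])  # row indices of the most similar patients
```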

It can then match patients to specific physicians or specialists who can potentially provide a better outcome; understand and analyze patients’ utilization patterns (utilization of resources in the care delivery network); and identify abnormal utilization (over- or under-utilization), which could be an indicator of a potentially poor outcome or unnecessarily high cost.

The similarity analytics suite of tools can also predict the patient’s potential future adverse outcomes and conditions. Therefore, it can identify opportunities for early intervention. 

Visualizing the evolution patterns of patients with similar attributes to the patient of interest.

What about Watson?

Watson can provide tailored and to-the-point answers with supporting evidence to questions based on the corpus of knowledge it is connected to. IBM Care Insights complements this knowledge-driven evidence, obtained from medical knowledge sources, with data-driven insights derived from the large patient population medical records discussed here.

10.21.2012

Tribbles, Spiderman ... and Tribology?

Trekkies might think of the television episode titled “The Trouble with Tribbles” upon first hearing about the niche scientific field of tribology. Tribbles were tiny furry creatures that invaded the USS Enterprise, causing chaos for Captain Kirk and crew. Tribology, though, would only relate to tribbles if Mr. Spock had examined how their fuzzy surfaces interacted, because tribology is the study of friction between two interacting surfaces.

In a real lab on earth, IBM scientists Bernd Gotsmann and Mark Lantz have actually done experiments on heat transport as part of IBM’s overall interest in developing energy efficient 3D chip stacks. Roughness plays an important role in understanding how heat is transported at an atomic scale across interfaces, like between stacked computer chips. And their work on heat transport across nano-scale interfaces was recently published in "Nature Materials."

Dr. Bernd Gotsmann
Bernd and Mark answered a few questions about their research.

Q. What is a good example to explain the science behind tribology?

BG: While Spiderman isn’t real, I think everyone can appreciate the following analogy. When Spiderman climbs on the side of a building, his fingertips come into contact with the bricks. Adhesion forces acting on such contact areas of, say, a square centimeter are strong enough, in principle, to hold Spiderman. But in reality things do not stick so easily, due to irregularities and roughness of the surfaces. In fact, you may find that by looking at his fingertips only a square millimeter will actually be in contact with the surface.

Now imagine looking at this same square millimeter with a microscope. You will find that the roughness will prevent most of the smaller touch points from making contact. Now zoom in even further and again, there are even fewer contact points.

So now the question is whether the notion that roughness governs the real contact between surfaces can reasonably be scaled down to the atomic level.

Dr. Mark Lantz
Q. Why is this important for making future 3D chip stacks?

ML: One of the critical challenges for scientists in developing 3D chip stacks is heat dissipation.  IBM is currently developing chips that use water cooling on the back of the chip to keep them at operating temperatures. But chips are made of silicon, which at an atomic scale is rough. And when several chips are stacked together the dissipation of heat across the entire surface of the chip is strongly influenced by the roughness of the interfaces between the chips.

Q. What findings did you report in your paper?

BG: Through a series of experiments we measure how heat flows across contacts that are extremely smooth, having roughness only on the nanometer scale and below.

We previously could not explain this data using conventional understanding. We then asked ourselves, what would happen if atomic scale roughness forces the contact to be governed by individual atoms? In this case, thermal transport is easy to describe because each instantaneous atom-atom contact carries a certain amount of heat, called quantized conductance. We found that this explains the data very well.
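
For reference, the textbook expression for the quantum of thermal conductance, which caps the heat a single ballistic channel (here, an individual atom-atom contact) can carry, is shown below; this is standard physics, not a figure taken from the paper.

```latex
g_0 = \frac{\pi^2 k_B^2 T}{3h} \approx 0.28\ \mathrm{nW/K}\ \text{per channel at } T \approx 300\ \mathrm{K}
```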

Atomic roughness of two surfaces
Q. What reaction do you expect from other scientists?

BG: Well, despite the concurrence between the data and the explanation, we unfortunately cannot look directly into such contacts and see which atoms make contact and which don't. We therefore hope that other scientists also apply the same notion and gather further support for our hypothesis.

While the questions raised fit into a current debate in the nanotribology community, the impact on the understanding of thermal transport could be just as large. Our work implies that thermal contacts are much easier to describe than previously thought. I am eager to learn whether our understanding holds for only a few special laboratory cases or if it is relevant for more general and technologically relevant systems.

Fractal Geometry

If you are familiar with IBM Fellow Benoit Mandelbrot’s work in fractal geometry then you already have a good basis for tribology. Fractals are used in modeling uneven or rough structures in Nature, like an eroded coastline, in which similar patterns recur at progressively smaller scales. Fractals use math to explore these rough irregularities.

Q. What are the next steps for your research?

ML: A very exciting extension of this work would be to perform similar heat transport measurements on sliding surfaces. Our research, to date, has only looked into static contacts formed by pressing two surfaces together, in order to investigate how the atomic scale roughness of the surfaces affects heat transport between them.

In contrast, tribology encompasses the study of interacting surfaces in relative motion. In this dynamic case, the roughness of the two surfaces is thought to play a critical role in the resulting friction and wear.  However, the details are not well understood, especially at the atomic scale. At this level, it is difficult to look into the contact to understand what parts of the two surfaces interact and how this changes with time.

Our work on static contacts has shown that heat transport can be used as a kind of fingerprint to study the nature of the contact on the atomic scale. Therefore, by measuring friction and heat transport simultaneously, we have a means to see into the contact. We could then study the nature of the atomic scale interaction between the two surfaces as they move, and then relate this to the measured friction to gain insight into the basic physical mechanisms that cause friction and wear.

"Spiderman" fans should take note and follow this research.

10.18.2012

Storage drivers take advantage of OpenStack


Avishay Traeger
Editor's note: This blog entry was co-authored by Avishay Traeger and Ronen Kat, storage researchers at IBM Research - Haifa.

Started two years ago by Rackspace and NASA, OpenStack was envisioned as an open and common code base for businesses to build public and private cloud infrastructure – while sharing common approaches to management services and APIs.

Ronen Kat
In short, OpenStack is free open source software for cloud infrastructure and management. It’s growing fast, with more than 150 member companies worldwide. IBM joined the OpenStack Foundation in April 2012, seeing the foundation’s platform as an important way to help clients use the cloud, as well as to promote a ubiquitous Infrastructure as a Service (IaaS) cloud platform for public and private clouds.

What about Nova-volume and Cinder, those storage drivers from Research?

The Nova-volume and Cinder drivers connect IBM storage products to OpenStack

IBM researchers in Haifa, Israel, developed drivers that connect new IBM storage devices to the cloud – taking advantage of OpenStack's automated provisioning, improved management, and more efficient compression.

The storage in OpenStack cloud environments can be provisioned using a self-service model that provides storage on an as-needed basis. Using this model, users only get and pay for what they really need, and the cloud provider avoids wasting cloud storage space through over-provisioning.

The Nova-volume and Cinder drivers let those with IBM storage products from the Storwize and SVC family take advantage of OpenStack's simplified cloud deployments and automated storage provisioning.

Connection is just the first step in using the new OpenStack platform. Our vision goes beyond features like automated storage tiering with IBM Easy Tier, Real-time Compression, and space-efficient storage in our products. The next steps will include advanced features, such as enabling differentiated capabilities through quality of service (QoS).

10.16.2012

Superconducting at room temperature?

IBM Research scientists adopt techniques from spintronics to pursue the answer 

Discovered a century ago, superconductivity promises to drastically improve storage and memory devices, create highly sensitive sensors, and make energy transmission cheaper. The challenge now is that even the best high-temperature superconducting materials – a class first demonstrated 25 years ago by IBM Research scientists – must still be chilled to around 77 Kelvin (-321F), the temperature of liquid nitrogen, before they superconduct. That Nobel Prize-winning discovery opened the field of high-temperature superconductivity, but scientists worldwide are after even higher temperature superconductors.

“A superconducting wire the diameter of your thumb could carry as much power, more efficiently, than a copper cable the thickness of your arm,” said Kevin Roche, a scientist at IBM Research – Almaden.

IBM Fellow Stuart Parkin
Following the principles of physics demonstrated by Müller and Bednorz in 1986, plus techniques derived from investigating spintronics – the study of electron spin across and between carefully arranged materials – IBM researchers, led by IBM Fellow Stuart Parkin, believe they are on the path to discovering synthetic materials that will superconduct at room temperature (297K or 75F).

Stretching back to DRAM, IBM researchers have conducted thousands of experiments that control the unique electron spin activity within precisely engineered material layers. Their use of spintronics to produce sensor devices that read smaller and smaller data bits also formed the core component of Magnetic Random Access Memory (MRAM) – a non-volatile, faster, less expensive alternative to flash memory.

Combining Spintronics with Superconductivity

Spintronics Scientists
Kevin Roche
“We’ve gotten to the point where we understand how to manipulate spin and its behavior in artificially engineered solids,” said Roche. “Right now, the current class of superconductors work at liquid nitrogen temperatures or 77 Kelvin (-321F).

“Imagine if instead of liquid nitrogen, all we needed was room-temperature water, about 75 degrees F – that’s 400 degrees Fahrenheit higher than what is currently possible today.”

Parkin and the researchers at the spintronics lab in Almaden are studying the phenomenon of spin-engineered materials and discovering exotic behaviors – and with new classes of materials cropping up, they believe there is now enough collective knowledge about how spin behaves that they might be able to come up with a pathway to develop room-temperature superconductivity.

“Normally, electrons go through a wire and they bounce around and generate heat – so you lose some of the power,” Roche says. “A superconductor has lossless transmission – meaning all of the electricity goes through and no power is lost.”

The prospect of power and energy transmitted via superconductors at the temperature of water is attractive because water is easily accessible and inexpensive. If room-temperature superconductivity is achieved, superconducting materials can be used in everyday technology.

IBM Research Colloquia: Synthetic Routes to Room Temperature Superconductivity 

In a two-day workshop held October 17 and 18 at IBM Research – Almaden in San Jose, CA, chemists, physicists and theorists from academy and industry worldwide will come together for the 2012 Almaden Institute, “Superconductivity 297K – Synthetic Routes to Room Temperature Superconductivity.”

The workshop will be led by Claudia Felser, director of the Max Planck Institute for Chemical Physics of Solids in Dresden, and Stuart Parkin. Stuart also manages IBM’s Magnetoelectronics Group and is director of the IBM-Stanford Spintronic Science and Applications Center, where he is a consulting professor.

Join the conversation:  @IBMResearch #SC297K with IBM Research expert Xin Jiang, tweeting live from the event

10.15.2012

Nobel for High-Temperature Superconductivity Turns 25

Twenty-five years ago today, IBM scientist Georg Bednorz received a call from the Nobel Committee, coincidentally while he was traveling in Stockholm, to tell him that he would receive the 1987 Nobel Prize for Physics for his work in high-temperature superconductivity.
Müller and Bednorz on 17 May 2011 at the opening of the
Binnig and Rohrer Nanotechnology Center

His colleague K. Alex Müller received the same call while at a conference in Naples, Italy. Perhaps Müller was a bit less surprised, as he had famously told his daughter over dinner, after submitting the discovery to a scientific journal, "this paper is going to make history."

And it sure did – in record time. It took only 22 months for the Nobel committee to honor Bednorz and Müller with the Nobel Prize.

During the commemoration speech, Professor Gösta Ekspong of the Royal Academy of Sciences explained, "less than two years old, it has already stimulated research and development throughout the world to an unprecedented extent."

The paper, "Possible High Tc Superconductivity in the Ba-La-Cu-O System," was first published in the German journal Zeitschrift für Physik B. The two were keenly aware that the discovery was so incredible that they even doubted themselves. The commercial benefits stemming from superconductors are only now reaching the market.

IBM's Centennial Icon of Progress for the superconductor breakthrough
For example, energy-efficient, high-temperature superconductor (HTS) power cables from American Superconductor are beginning to roll out around the world. In 2008, the first (and still the longest) HTS cable was installed on Long Island, New York, and is currently transmitting up to 574 MW of electricity – enough to power 300,000 homes.

In the mid-western United States, the Tres Amigas Project is currently underway to link three power grids and create the nation’s first renewable energy market hub.

In the metal processing industry, large machines called billet heaters use electricity to heat metals to 1,100 deg C (2,012 deg F) to soften them before processing. Using high-temperature superconductivity, the German company Bültmann GmbH has developed a magnetic billet heater that is 80 percent efficient, saving the equivalent of 800 barrels of oil per year.

In the future, magnetic levitation (maglev) trains will use on-board magnets that levitate the train above the rails, making them more energy efficient and faster. Initial testing of maglev trains in Japan has recorded speeds of 581 kilometers per hour (361 mph).

While the discovery remains many years away from broader adoption, its promise is apparent and its potential seems limited only by the imagination of science.


10.10.2012

Goldberg Annual Best Paper Awarded to...

More than 110 papers published in refereed conference proceedings and journals in 2011 were submitted by IBM Research authors worldwide for the annual Pat Goldberg Memorial Best Paper awards in computer science, electrical engineering and mathematical sciences.

One of the winners is a team from IBM's Zurich lab working in the area of tape storage. We spoke with a few members of the team to find out more.

The machine IBM used to demo a new record in magnetic tape data density of 29.5 billion bits per square inch

Q. Explain your achievement in the 29.5-Gb/in^2 Recording Areal Density on Barium Ferrite Tape paper.

Giovanni Cherubini: The paper describes a single-channel demonstration that investigated the future potential of tape from a recording physics and track-follow-servo perspective. This includes a variety of technologies that we developed to demonstrate the feasibility of operating at an areal density of 29.5 Gb/in^2 using prototype perpendicular Barium Ferrite (BaFe) media.

This density was more than a factor of 30 higher than the areal density of IBM's Linear Tape Open Generation 5 and the IBM 3592 (Jaguar 3) tape drives -- which were state of the art at the time of the demo. The work demonstrated the potential for the continued scaling of tape, based on low-cost particulate media technology, for at least the next 10 years. This gives organizations and enterprises confidence to continue investing in tape infrastructure to meet their growing archiving and data protection needs.

Q. Why did you select Barium Ferrite for the demonstration?

Mark Lantz: The state-of-the-art media technology at the time of the demo, which is based on metal particle (MP) technology, is running out of continued scaling potential. We've previously investigated a variety of new media technologies with the potential to replace MP. However, BaFe was found to be the best candidate that is still based on low-cost particulate coating technologies.

Q. What is next for this research?

Angeliki Pantazi: The single-channel demonstration described in our paper showed the potential for future scaling, but that did not take into account several of the challenges that arise from parallel channel tape recording, and from using production level hardware (rather than specialized experimental tape transport systems).

The next step we took was to transfer the technologies developed for the single channel into production-level hardware and perform a parallel in-drive cartridge demo -- which we completed at the beginning of this year. We are now planning to continue this work, and have started working on a new single-channel demo with a target areal density of 100 Gb/in^2.

Q. Will we ever see Barium Ferrite tape products in the market, and when?

ML: BaFe media was introduced with IBM’s latest enterprise tape drive, the TS1140 (Jaguar 4), and will also be introduced for use in LTO6 (the next-generation midrange product) along with MP media, though these are less advanced versions of BaFe media than the one used in the demo.

Q. Why do we still hear that tape is no longer viable, particularly in this era of Big Data?

GC: Some have claimed that tape has been dead for more than 20 years!

Tape suffers from being a back-end solution no longer visible to the average consumer, who likely has many disk drives in products ranging from laptops to digital video recorders. Unfortunately, this lack of visibility makes such claims about tape more believable.

Despite this lack of visibility, tape still plays an extremely important role in the storage hierarchy. In fact, total tape media capacity shipped each year exceeds the total capacity of external disk systems -- which highlights how much of enterprise data actually resides on tape. One of the goals of this work was to increase the visibility of tape and clearly demonstrate that not only is tape not dead, but that it has a bright future as a scalable, low cost technology for archiving and data protection.
 

10.02.2012

New IBM Mainframe Gets Crypto Upgrade


A few weeks ago, IBM announced its most powerful and technologically advanced mainframe ever -- the new zEnterprise EC12.

The new system features state-of-the-art technologies that demonstrate IBM’s ongoing commitment to secure and manage critical information with the System z mainframe. More specifically, the mainframe includes a new cryptographic co-processor called the Crypto Express4S designed by IBM scientists in Zurich.

To help understand this innovation better, we spoke with its two developers: Silvio Dragone, who designed the hardware, and Tamas Visegrady, who wrote the code.

Q. What exactly does a cryptographic co-processor do?

Tamas Visegrady: It’s a device to segregate security-relevant operations. This means sensitive data can be secured in a dedicated environment, reducing risk.

Silvio Dragone: Exactly. It is a card that controls security, physically separated from the processor. You can think of it as a PC dedicated to just cryptography.

Q. What exactly did you develop?

SD: The work was originally developed within IBM's Systems and Technology Group (STG) in Poughkeepsie, NY. We began collaborating with them a few years ago, particularly as stronger requirements were being introduced by the EU, such as new passports with smart chips. Working closely with STG, we designed the architecture for the recent Crypto Express4S hardware enhancements and wrote specifications plus prototype code for the new mainframe crypto provider firmware.

TV: Another unique feature of the card was developed by our Zurich colleague Heiko Wolf. He is an expert in nanopatterning, and made the cards tamper-proof with specially designed packaging and materials.

Q. What was the biggest challenge in developing the Crypto Express4S?

SD: Well, the Crypto Express4S is the last line of defense in protecting data. And when it comes to our clients, this tends to be very sensitive data such as passports, national ID cards and financial information. So, losing this data is not an option.
The front cover of a contemporary Dutch biometric passport

The biggest challenge is designing the hardware and software so it's fail-proof. We refer to it as “mainframe reliability,” which is unique. It’s not often that hardware and software guys agree on everything, but Tamas and I meet in the middle more often than not.

TV: We’ve been developing cryptographic co-processors for 10 years, and one of the biggest challenges is to continue to push the boundaries of their development for each new mainframe. The 4S is significantly faster, thanks to some new algorithms, but speed can sometimes become a trade-off with security. So, it's a big challenge to get them working together.

Often you read about raw speed being the key factor of a cryptographic co-processor, but that’s like saying a 32 megapixel camera is the best. There are many factors to consider, and in our opinion we will accept a small drop in speed for an increase in reliability and security.

Q. So what’s next?

TV: As mainframes benefit from advancements in technology, whether it's more speed or more power, it trickles down to the cryptographic co-processor. So, we need to work this into our designs to keep up with the needs of the industries we support.

SD: We keep an eye on what is happening in terms of security threats and try to preempt any weaknesses before they impact our clients.