5.28.2012

Olympicene: from doodle to stunning image of the smallest possible five-ring structure


Scientists have created and imaged the smallest possible five-ringed structure – about 100,000 times thinner than a human hair – and you'll probably recognise its shape.



A collaboration between the Royal Society of Chemistry (RSC), the University of Warwick and IBM Research – Zurich has allowed the scientists to bring a single molecule to life in a picture, using a combination of clever synthetic chemistry and state-of-the-art imaging techniques.

"The key to achieving atomic resolution was an atomically sharp and defined tip apex as well as the very high stability of the system," explains IBM scientist Dr. Leo Gross. "We prepared our tip by deliberately picking up single atoms and molecules and showed that it is the foremost tip atom or molecule that governs the contrast and resolution of our AFM measurements."

One particular challenge for Dr. Gross and the team was that they had to abstract a single hydrogen atom from olympicene. This is not something that can be done with a pair of tweezers – when you consider that olympicene is only 1.2 nanometers wide, or 100,000 times thinner than a human hair, it is an impressive feat.




The technique was first published by IBM scientists in Science in 2009.

Read on in this blog post from Antony Williams of the Royal Society of Chemistry.

5.24.2012

Treating disease with real-world evidence

Data from patients' drug reactions help researchers advance personalized healthcare.

Diseases are treated based on statistics drawn from large populations. For example, hospitals treat patients in different wards or buildings based on their disease. Although patients are treated as a group, they respond – or don't respond – as individuals. In fact, many patients go through trial and error until the right medication or dosage is found.

Businessweek noted that of the hundreds of billions of dollars spent on prescription drugs, over 40 percent went to medications that didn't help the patient. And billions more are being spent to treat adverse drug reactions and complications resulting from the medications themselves.

Healthcare professionals are responding to these error-prone and wasteful treatments by exploring "Real World Evidence." It's a term widely used within the medical field for collecting and reviewing the impact of chronic disease treatments to find the most-effective options for personalized care. At last week's Clinical Genomic Analysis workshop in Haifa, Israel, IBM Research – Haifa and the Edmond J. Safra Institute at Tel Aviv University hosted scientists and physicians working on bioinformatics research being conducted to do exactly this: offer "Real World Evidence."
Left to right: Rick Kaplan, Oded Cohn, Michal Rosen-Zvi

"The gold mine of information available to the various stakeholders in the pharmaceutical and medical industry is just waiting to be tapped," said Michal Rosen-Zvi, manager of clinical genomic analytics at IBM Research – Haifa and organizer of the workshop.

"This new trend is very much in line with the rapid development of machine learning analytics and data-mining technologies to extract insight from masses of data," Rosen-Zvi said.

"Companies can now collect information that ensures their responsibilities go beyond pre-clinical and clinical trials, and use the data collected afterwards to optimize drug usage, efficacy, pricing, and security." This provides a situation in which patients can avoid complications and pharmaceutical companies can improve business by better targeting the drugs.

Connecting genes to treatment

Gabi Barbash, director general of the Tel Aviv Sourasky Medical Center (Ichilov), spoke about inefficient drug therapy for cancer as a motivator for personalized medicine, and about new directions in genomic-based cancer therapy. He dispelled the myth that all mistakes in gene coding cause disease. In reality, only five percent of the gene coding contains abnormalities that cause disease. These abnormalities are found in some of the genome's single nucleotide polymorphisms – or SNPs (pronounced "snips").
Prof. Gabi Barbash

"Not all SNPs have diagnostic implications, but by correlating the SNPs and the disease, we can find out which genes are linked to which diseases," Barbash said.

"By comparing the genome of people suffering from a certain disease with the genome of those who don't, we can identify the SNP involved – and then try to find out whether the SNP is the cause of the disease, or whether the disease itself has changed the SNP." He sees these connections as the basics of the genome-wide studies that can help improve treatments.

Watson, IBM's question answering machine that understands natural language, is also providing oncologists at the Memorial Sloan Kettering Cancer Center with improved access to current and comprehensive cancer data and practices. The resulting decision support tool will help doctors everywhere create individualized cancer diagnostic and treatment recommendations for their patients, based on current evidence – and has the potential to include available SNP research.

Watson is already being used by other healthcare providers. WellPoint, the largest health benefits company in the US, is using the technology to provide alternative options to proposed treatment processes. "Just think of what Watson can do for physicians when it comes to answering difficult questions by looking up and cross-checking information, and providing a probability that this is the right answer," said Rick Kaplan, newly appointed Country General Manager of IBM Israel.

Laws and regulation could make new data available

In a panel discussion on how the availability of real world evidence could influence medical practice, Dr. Nava Sigelmann-Danieli, director of the oncology service line at Maccabi Health Services, pointed out that the individuals participating in clinical trials don't accurately represent real-world patients. For example, most women participating in clinical trials for breast cancer treatment are between the ages of 40 and 60. But in reality, most women suffering from breast cancer are over 70 years of age.

Panelist Dr. Lior Soussan-Gutman, managing director of the Oncotest-TEVA business unit at Teva Pharmaceuticals, pointed out that new regulations requiring pharmaceutical companies to share the "real world evidence" collected during and after clinical trials would open another stream of data to healthcare providers.

This combination of research, machine learning, and new laws continues to offer a more complete – and personal – view of healthcare treatment options.

Read more about the workshop presentations here.

Additional resources

Machine learning and data mining at IBM Research – Haifa
The Machine Learning and Data Mining group specializes in developing algorithms that learn to recognize complex patterns within rich and massive data. Read more.

IBM's Biomedical Analytics Platform Helps Doctors Personalize Treatment
Italy's Istituto Nazionale dei Tumori testing new decision support solutions for cancer treatments. Read more.

5.14.2012

White House highlights Materials Genome Initiative

Editor's note: this article is by David Turek, IBM's vice president of High Performance Computing Scalable Systems.

Today, I am participating in a White House event highlighting the first results and next steps of the Materials Genome Initiative (MGI), which President Obama announced almost one year ago.

The name of this initiative is a riff on the Human Genome Project because it intends to marshal and organize significant scientific resources to gain a deep understanding of the structure and behavior of a vast array of materials. The goal is to help U.S. companies become more economically competitive by applying discoveries in materials science to the development of new and improved products across a host of industries, at far greater speed and much lower cost than is currently possible.

IBM is well aware of the challenges in advancing materials science. IBM Research started the Battery 500 Project in 2009 to develop a new type of lithium-air battery technology that is expected to improve energy density tenfold -- dramatically increasing the amount of energy these batteries can generate and store. And we invented the silicon germanium semiconductor, laying the groundwork for explosive advancement in wireless products.

There are a host of other projects in materials science that could lead to new desalination membranes, biopolymers for medical applications, and new materials to break the memory bottleneck in advanced computers. The list goes on, but IBM's pioneering insight has been to advance materials research by linking experimental techniques with large-scale simulation and modeling.

To realize the goals of the MGI, it is essential that we build the right kind of supporting infrastructure. It needs to have three key characteristics:
  • Massive compute power: Deep understanding of materials depends on an understanding of molecular structure and behavior under a wide array of stresses and forces. Modeling and simulation at the atomic level has been shown to generate keen insights into many materials, but massive amounts of compute power are often required. Using powerful supercomputers like the Blue Gene system has been very effective in conducting these types of simulations because we are able to explore millions of atoms in models of diverse materials. As the scale of the problem increases, understanding of the macro behavior of the target material deepens and the path to commercialization accelerates.
  • Built for data and analytics: Modeling does not occur in a vacuum; the data that describes the underlying processes must be accommodated in the MGI infrastructure. In some cases, the sheer volume of data will present challenges to store and analyze. In other cases, the complexity of the data will warrant new models of organization and analysis. In all cases, the MGI infrastructure must ensure that data, analytics, modeling and simulation are inextricably linked in a way that leads to near real-time understanding of very complex scientific problems. Compressing time to solution is what leads to competitive advantage.
  • Collaborative: Material scientists must have the ability to collaborate widely.  Sharing data and compute resources is a foundational requirement of the MGI. The computing infrastructure enabling this will also need to be secure and accommodate the presence of proprietary processes and insights from many of the likely industrial partners.  
Argonne National Laboratory (ANL) is an example of how the right infrastructure can support materials science breakthroughs. Its IBM Blue Gene/Q supercomputer, named Mira, will be a 10-petaflop machine -- meaning it will be capable of performing 10 quadrillion calculations a second, making it one of the fastest in the world. ANL researcher Larry Curtiss plans to apply this added compute power to the aforementioned Battery 500 project.

A key factor in these kinds of simulations is including enough atoms that scientists get a realistic response from the simulation. Working with catalytic processes, for instance, the team at ANL has been able to model reactions involving about 1,000 atoms. With Mira, they'll be able to model reactions involving tens of thousands of atoms.
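
To see why atom counts are so expensive, consider a toy molecular dynamics energy evaluation in Python. The naive pairwise loop below scales quadratically with the number of atoms, which is the basic reason that going from roughly 1,000 atoms to tens of thousands demands a machine like Mira. This is a generic Lennard-Jones sketch, not ANL's actual catalysis code.

```python
import numpy as np

def lj_total_energy(positions, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy over all atom pairs.

    The double loop touches N*(N-1)/2 pairs, so cost grows as O(N^2):
    going from 1,000 to 50,000 atoms means ~2,500x more work per step.
    Production codes use neighbor lists and domain decomposition, but
    the atom count still dominates the compute budget.
    """
    n = len(positions)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            sr6 = (sigma / r) ** 6
            energy += 4.0 * epsilon * (sr6 * sr6 - sr6)
    return energy

rng = np.random.default_rng(0)
atoms = rng.uniform(0.0, 10.0, size=(200, 3))  # 200 atoms in a 10x10x10 box
print(lj_total_energy(atoms))
```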

Over at the Lawrence Livermore National Laboratory (LLNL), they will soon flip the switch on a 20-petaflop IBM Blue Gene/Q supercomputer named Sequoia. In 2005, researchers from LLNL and IBM were awarded the Gordon Bell Prize for pioneering materials science simulations, and the level of performance they achieved, on the Blue Gene/L supercomputer at LLNL. In that project, simulation capability was increased from thousands of atoms to millions of atoms, and the simulations still took many hours. With the advent of Sequoia, LLNL scientists will be able to run the same simulations in a few minutes -- or increase the fidelity of the model by adding hundreds of millions more atoms.

In the late 1990s, when the Human Genome Project was coming to fruition, a whole new industry -- bioinformatics -- was born. Hundreds of new companies burst into existence seemingly overnight. My belief is that we are at the cusp of a similar phenomenon with the MGI, and IBM plans to be present at the dawn of a new age in materials science.

Other Materials Genome Initiative Projects

World Community Grid: Clean Energy Project

Simpler tools for more complex systems

Editor's note: This blog entry is authored by Gabi Zodik, Department Group Manager of Software and Services at IBM Research – Haifa.

Systems such as planes, cars, and air traffic control are becoming more and more complex. Although they now provide us with functionality, efficiency, and productivity never before imagined, they also introduce new engineering challenges. This is especially true in the design and development of engineered systems that require the integration of different disciplines, such as software and hardware.

For example, 10 years ago cars had one or two processors, whereas today a single car may have more than 100 processors running anything from Bluetooth connectivity to proximity sensors. We are developing new methods and tools to help designers cope with the complexity of making all of these things work together, by automating and streamlining the design and development phases.

Streamlining design for systems and software


One of two system complexity problems we're tackling is system design. Even the best engineers need to spend days or weeks testing possible design options to find the best ones. Take the car again: when designing an exhaust system, an engineer has to choose which option is best while taking into account engine performance, exhaust pressure, temperature, vibrations, and more.




Our new design space exploration tool helps ease this challenge by automatically exploring different design options, while taking into account the different parameters and constraints involved. The system engineers get a reduced collection of the optimal and practical solutions to choose from based on their experience – all in minutes.
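
As a rough sketch of what such a tool automates, the Python below enumerates a tiny, hypothetical exhaust-system design space, discards designs that violate a hard temperature constraint, and keeps only the Pareto-optimal cost/back-pressure trade-offs for the engineer to choose from. The parameters and scoring formulas are invented stand-ins, not the actual models inside IBM's tool.

```python
from itertools import product

# Invented surrogate models standing in for real engineering analyses.
pipe_diameters_mm = [40, 50, 60]
muffler_types = ["compact", "standard", "performance"]

def evaluate(diameter, muffler):
    cost = diameter * 2 + {"compact": 120, "standard": 80, "performance": 200}[muffler]
    back_pressure = 300.0 / diameter + {"compact": 4.0, "standard": 2.5, "performance": 1.0}[muffler]
    max_temp_c = 550 if muffler == "compact" else 650  # thermal rating
    return cost, back_pressure, max_temp_c

# Enumerate the design space, keeping only designs that satisfy the
# hard constraint (must tolerate 600 degrees C).
feasible = []
for d, m in product(pipe_diameters_mm, muffler_types):
    cost, bp, temp = evaluate(d, m)
    if temp >= 600:
        feasible.append({"diameter_mm": d, "muffler": m,
                         "cost": cost, "back_pressure": bp})

def dominated(a, b):
    """True if design b is at least as good as a everywhere and strictly better somewhere."""
    return (b["cost"] <= a["cost"] and b["back_pressure"] <= a["back_pressure"]
            and (b["cost"] < a["cost"] or b["back_pressure"] < a["back_pressure"]))

pareto_front = [a for a in feasible if not any(dominated(a, b) for b in feasible)]
for design in sorted(pareto_front, key=lambda x: x["cost"]):
    print(design)
```

In a real tool, the brute-force enumeration would be replaced by an optimization engine, but the output is the same in spirit: a short list of non-dominated designs rather than the full combinatorial space.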

Although optimization solutions of this type are already used to solve work shift scheduling, transportation or finance problems, this is the first time they're being used in the world of systems engineering to automate the design process. IBM expects the market opportunity for embedded systems to reach billions of dollars per year.  

A fusion of development and operations efforts

Attending Innovate 2012? Join us for Research Day on June 3
These sessions will include talks such as "Smarter System Engineering: How System Analytics is Changing the Role of the Systems Engineer," and "Weaver: Advanced DevOps Platform"

We also developed a tool, called Weaver, that eases the hand-off between application developers and system administrators. A developer may not know how the software will be used, the hardware it will run on, or the operating environment. And an administrator may not have the expertise to debug the software or maintain it. As a result, deployment can mean serious overhead costs associated with testing, planning the deployment, finding workarounds for issues, and encountering bugs for the first time.

Weaver brings together two formerly separate processes. This new approach combines software development with a programming and modeling environment to develop the infrastructure on which the software will be deployed. Created in parallel to the software itself, this environment defines all the deployment platform characteristics such as IO, memory requirements, disk size, and anything else needed for the operating environment or virtual environment. By doing all of this in parallel, everything from diagnostics to testing the deployment process becomes much more efficient. 
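
The article doesn't show Weaver's modeling language, but the idea of describing the deployment platform alongside the software can be sketched in a few lines of Python. Everything here – the class name, fields, and validation logic – is a hypothetical illustration of the approach, not Weaver's actual API.

```python
from dataclasses import dataclass

@dataclass
class DeploymentSpec:
    """Hypothetical deployment descriptor, developed alongside the software."""
    min_memory_gb: int
    min_disk_gb: int
    io_profile: str   # e.g. "network-heavy" or "disk-heavy"
    runtime: str      # target operating environment / VM image

    def validate(self, host):
        """Return mismatches between this spec and a candidate host."""
        problems = []
        if host.get("memory_gb", 0) < self.min_memory_gb:
            problems.append("insufficient memory")
        if host.get("disk_gb", 0) < self.min_disk_gb:
            problems.append("insufficient disk")
        if host.get("runtime") != self.runtime:
            problems.append("runtime mismatch: wanted " + self.runtime)
        return problems

spec = DeploymentSpec(min_memory_gb=8, min_disk_gb=100,
                      io_profile="network-heavy", runtime="linux-x86_64")
print(spec.validate({"memory_gb": 4, "disk_gb": 250, "runtime": "linux-x86_64"}))
# -> ['insufficient memory']: caught at design time, not on deployment day
```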

In short, we're creating more automation and more efficiency in the design and development of complex systems. These trends are just a few of the topics being presented at the Research sessions for Innovate 2012, June 3 – 7 in Orlando, Florida.

5.11.2012

Fifteen years after Deep Blue's chess victory

On May 11, 1997, IBM supercomputer Deep Blue made "man versus machine" history by winning a six-game chess match against a grand master with two wins, one loss and three draws. The technology went beyond playing chess: it was applied to financial modeling, molecular dynamics, and the development of new drugs.

Want to know more about the game, and the technology inside Deep Blue? One of its original developers, Dr. Murray Campbell, is hosting a Twitter chat today at #deepblue, from 1:00 to 2:00 p.m. US Eastern.

Research scientist Dr. Murray Campbell on Deep Blue


5.09.2012

IBM's pioneering text mining research effort honored in Japan

In 1997, a team of researchers at IBM Research - Tokyo invented TAKMI, a technology that can read and uncover trends from the avalanche of information in natural language format. The Ministry of Education, Culture, Sports, Science and Technology of Japan recently honored the research team for its contribution in pioneering text mining technology with the 2012 Commendation for Science and Technology.

TAKMI (Text Analysis and Knowledge Mining) is a text mining technology that goes beyond search -- analyzing data ranging from the structured and numerical to the unstructured and text-based. It looks for unknowns by mining data such as email, product reviews on the Internet, memos, and other written documents.

“[What] unstructured information can tell you is the answer to questions you didn’t even know you needed to worry about. It lets you know what you don’t know,” said Scott Spangler of IBM Research - Almaden and co-author of Mining the Talk: Unlocking the Business Value in Unstructured Information.

Award recipients (from left): Tetsuya Nasukawa, Kohichi Takeda, Seiji Hamada, Hiroshi Kanayama and Hideo Watanabe.
TAKMI also incorporates grammatical relationships into its analysis. Analyzing Japanese was a challenge for the research team because, unlike English, the language does not use white space to separate words. The researchers used a natural language processing technique called dependency parsing, which identifies which word is the subject, the verb, and the object, and examines the relationships between words. This technique was also used to help IBM Watson, the DeepQA system, learn natural language written in English.
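
Dependency parsing is easy to demo with open-source tools. The Python sketch below uses the spaCy library (not TAKMI, whose parser is IBM's own) to expose the kind of subject/verb/object structure the article describes; the sample sentence is invented.

```python
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Customers reported a battery problem after the update.")

# Each token carries its grammatical relation (dep_) and its head word.
for token in doc:
    print(f"{token.text:10} {token.dep_:10} -> {token.head.text}")

# Extract subject-verb-object triples, the raw material of
# grammar-aware text mining.
for token in doc:
    if token.dep_ == "nsubj":
        verb = token.head
        objects = [child.text for child in verb.children if child.dep_ == "dobj"]
        print("triple:", token.text, verb.text, objects)
```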

Today, the text mining technology pioneered by IBM Research is widely applied in industries including manufacturing, finance, insurance, broadcasting, telecommunications and retail to help improve customer care and the quality of products and services, and to expand business opportunities.

Last year, the award was given to IBM's accessibility research team led by IBM Fellow Chieko Asakawa in recognition of their contributions to the development of a voice browser for the visually impaired, which has since become the foundation for Web accessibility research and development, and for accessibility legislation and standardization around the world.

5.01.2012

Diagnosing psychosis with word analysis

Editor’s note: This article is by Dr. Guillermo Cecchi of IBM Research’s Computational Biology Group. 

Analyzing the spoken words of people with mental health disorders could significantly improve the accuracy of diagnosing mania and schizophrenia. In a PLoS ONE paper, my Computational Biology team, collaborating with researchers and clinicians in Brazil, showed that quantifying and graphing speech alone was 93 percent accurate in identifying these cases of psychosis.

This collaboration with professionals across medical, neuroscience, and technical departments at Brazil's Federal University and the Universidade de Sao Paulo was the first time that psychiatric differential diagnosis was implemented directly from speech analysis. In other words, our study, Speech Graphs Provide a Quantitative Measure of Thought Disorder in Psychosis, was the first to relate thought disorder to mathematical structures – graphs.

Word graph: we transcribe the speech to text and create graphs in which nodes denote words, and edges between them indicate the temporal succession of the words.

Diseases such as cancer have clear genomic and proteomic signatures, while psychiatric conditions are more elusive, and may be mostly determined by functional disruptions (problems with our human “software” versus our “hardware”). We set out to show how psychiatry can benefit from computational insights.

So what did we do, and what did we find?

Psychologists at Federal University interviewed hospital patients using standard diagnostic methods, according to the Diagnostic and Statistical Manual of Mental Disorders requirements. The IBM team wanted the text. And after the interviews were manually translated into English, we analytically confirmed – through graphs – the qualitative features of mania and schizophrenia.

Manic graphs are more verbose and contain more loops (when the patient's train of thought continually returns to the same concept) than a normal graph. Schizophrenic graphs are less verbose, but more tangential (the patient's focus consistently shifts from one concept to many others) than normal.
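
Following the paper's construction, a speech graph is simple to build: nodes are words, and directed edges link each word to the one spoken next. Here is a minimal Python sketch using the networkx library; the transcript is invented, loosely echoing the dream-report example in the figure below.

```python
import networkx as nx

# Invented transcript, loosely echoing the paper's dream-report example.
transcript = "i walked and i found a dog and i hugged the dog and then i woke up"
words = transcript.split()

# Nodes are words; a directed edge joins each word to the next one spoken.
G = nx.DiGraph()
for current_word, next_word in zip(words, words[1:]):
    G.add_edge(current_word, next_word)

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("loops:", len(list(nx.simple_cycles(G))))  # recurring trains of thought
```

Verbosity shows up in the node and edge counts, and the loops correspond to a train of thought returning to the same word.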

Traditional interviews consider a handful of scales that quantify the severity of symptoms, with final diagnosis resting with the judgment of the psychiatrist. This method is about 62 percent accurate. Taking only patterns of words – how many words were spoken; how quickly they were spoken; how topical they were – our study’s diagnosis was 93 percent accurate.

Speech graphs of schizophrenic, manic and control subjects
Speech graph analysis in schizophrenia, mania and control reports. A) Subjects were asked to report a recent dream. Each report was transcribed and parsed into canonical grammatical elements (words translated from Portuguese, elements separated by slashes). Parts related to dreaming (blue) were sorted from parts related to waking (red), which were considered deviations from the anchor topic. B) Speech graph from the example shown in A), with edges sequentially numbered. The node "I" appears 3 times in the dream sub-graph ("I walked", "I found", "I hugged"), and then once in the waking sub-graph ("I woke up"). C) Speech graph examples representative of the schizophrenics (subject MG), manics (subject AB) and controls (subject OR). Graphs plotted using global energy minimum (GEM). The complete database is available as Supporting Information. doi:10.1371/journal.pone.0034928.g001
The difference in accuracy is purely due to psychiatrists' use of other factors to make a diagnosis.
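
The shape of the resulting pipeline is easy to sketch. Below, a toy classifier is trained on per-interview features of the kind the study measures (word count, speech rate, loop count); all numbers are invented, and the study's real feature set and statistics are in the PLoS ONE paper.

```python
from sklearn.linear_model import LogisticRegression

# Per-interview features: [word_count, words_per_minute, loop_count]
# (all values invented for illustration).
X = [
    [410, 95, 7], [520, 120, 9],   # manic-like: verbose, many loops
    [150, 60, 1], [170, 55, 0],    # schizophrenic-like: sparse, tangential
    [300, 80, 3], [280, 85, 2],    # controls
]
y = ["mania", "mania", "schizophrenia", "schizophrenia", "control", "control"]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[480, 110, 8]]))  # likely 'mania' on these toy data
```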

Psychosis is part of the spectrum of thought disorders, and the most conspicuous symptoms are expressed in language. Today, the main tool for diagnosis is the personal interview, and a doctor’s assessment of abnormal thought processes reflected in speech.

Words are the most prominent variables when talking about manic and schizophrenic conditions. We want to establish variables and boundaries – such as the number of words that indicates a condition – that could be built into a technology giving clinicians, as well as researchers, a more quantitative look at their data, so that their diagnosis and treatment decisions, which ultimately rest with them, can be better informed.

We are also engaged in extending these initial results to larger cohorts, as well as other modalities of thought and emotional alterations, such as autism and Asperger’s. Preliminary indications show that semantic measures of similarity between words (as opposed to the speech structure revealed by graphs) can be used to help diagnose these other psychiatric conditions that affect emotional processing.

Read the complete report here: Speech Graphs Provide a Quantitative Measure of Thought Disorder in Psychosis.