3.27.2012

"IT Nobel" Awarded to IBM Researcher


The IEEE Computer Society has awarded its 2012 W. Wallace McDowell Award to Ronald Fagin, Manager, Foundations of Computer Science at IBM Research - Almaden, "for fundamental and lasting contributions to the theory of databases."

Popularly referred to as the "IT Nobel," the W. Wallace McDowell Award is awarded by the IEEE Computer Society for outstanding recent theoretical, design, educational, practical, or other similar innovative contributions that fall within the scope of Computer Society interest. One of computing's most prestigious individual honors, the W. Wallace McDowell Award has a list of past winners that reads like a who's who of industry giants. They include FORTRAN creator John W. Backus (1967); supercomputer pioneers Seymour Cray (1968), Gene Amdahl (1976), and Ken Kennedy (1995); the architect of IBM's mainframe computer Frederick Brooks (1970); Intel Corp. co-founder Gordon Moore (1978); Donald Knuth, the father of algorithm analysis (1980); microprocessor inventor Federico Faggin (1994); World Wide Web inventor Tim Berners-Lee (1996); and Lotus Notes creator and Microsoft Chief Software Architect Ray Ozzie (2000).

Previously, Ron won the 2011 IEEE Technical Achievement Award for pioneering contributions to the theory of rank and score aggregation.

Ron's first major contribution to relational database theory was the introduction of Fourth Normal Form, which captures crucial desirable aspects of database design. In particular, Ron's Fourth Normal Form formalizes the intuition that in a well-designed database, unrelated data should not be stored in the same table.  This normal form is now universally accepted and is included in all standard database books, from undergraduate textbooks to advanced research monographs. 
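The redundancy that Fourth Normal Form eliminates can be seen in a small sketch (Python, with hypothetical data): when two independent multivalued facts about the same entity share one table, every combination of them must be stored.

```python
from itertools import product

# An employee's skills and spoken languages are independent facts.
skills = {"alice": ["sql", "java"]}
langs = {"alice": ["en", "fr"]}

# Single-table design: the independence (a multivalued dependency)
# forces every skill/language combination into the table.
flat = [(e, s, l) for e in skills for s, l in product(skills[e], langs[e])]
# 2 skills x 2 languages = 4 rows; n skills and m languages need n*m rows.

# 4NF design: one table per independent multivalued fact.
emp_skill = [(e, s) for e in skills for s in skills[e]]  # 2 rows
emp_lang = [(e, l) for e in langs for l in langs[e]]     # 2 rows
```

Updating a single fact in the flat design touches many rows; in the 4NF design it touches exactly one.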

Ron (with Nick Pippenger, Jurg Nievergelt, and Ray Strong) invented extendible hashing, a database access technique in which the user is guaranteed no more than two page faults to locate the data associated with a given unique identifier, or key. Unlike conventional hashing, extendible hashing has a dynamic structure that grows and shrinks gracefully as the database grows and shrinks. Because it combines the speed of hashing with adaptable dynamic behavior, and because it is easy to understand and to program, extendible hashing is now widely studied and widely implemented. 
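A minimal in-memory sketch of the idea (real implementations operate on disk pages; bucket capacity and the structure here are simplified assumptions): a directory of pointers doubles when a full bucket's local depth equals the global depth, and a lookup touches at most the directory plus one bucket, which is the source of the two-page-fault guarantee.

```python
class Bucket:
    def __init__(self, depth, capacity=2):
        self.depth = depth          # local depth of this bucket
        self.capacity = capacity
        self.items = {}

class ExtendibleHash:
    def __init__(self):
        self.global_depth = 1
        self.directory = [Bucket(1), Bucket(1)]

    def _index(self, key):
        # Use the low global_depth bits of the hash as directory index.
        return hash(key) & ((1 << self.global_depth) - 1)

    def get(self, key):
        return self.directory[self._index(key)].items.get(key)

    def put(self, key, value):
        b = self.directory[self._index(key)]
        if key in b.items or len(b.items) < b.capacity:
            b.items[key] = value
            return
        if b.depth == self.global_depth:
            # Directory is full for this bucket: double it.
            self.directory += self.directory
            self.global_depth += 1
        # Split the full bucket on its next-highest hash bit.
        b.depth += 1
        sibling = Bucket(b.depth, b.capacity)
        high_bit = 1 << (b.depth - 1)
        for k in list(b.items):
            if hash(k) & high_bit:
                sibling.items[k] = b.items.pop(k)
        # Repoint the directory slots that now belong to the sibling.
        for i in range(len(self.directory)):
            if self.directory[i] is b and i & high_bit:
                self.directory[i] = sibling
        self.put(key, value)  # retry; may split again
```

Shrinking (merging siblings and halving the directory) is omitted for brevity but follows the same bit-level logic in reverse.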

Ron has devised several algorithms (including "Fagin's Algorithm") for combining information from multiple sources. This work had a significant influence on the design of the query processing component of the IBM InfoSphere Federation Server, and on the design and implementation of the parametric search for IBM WebSphere Commerce.
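A sketch of the original Fagin's Algorithm for monotone score aggregation over several score-sorted lists (the data in the test is illustrative): read the lists in parallel by sorted access until k objects have been seen in every list, then use random access to fill in any missing scores and return the k highest aggregates.

```python
import heapq

def fagins_algorithm(lists, k, agg=sum):
    """lists: each source is a list of (object, score) pairs sorted by
    score descending; a dict per source simulates random access."""
    m = len(lists)
    lookup = [dict(src) for src in lists]
    seen = {}  # object -> set of sources where seen via sorted access
    depth = max(len(src) for src in lists)
    for pos in range(depth):
        for src_id, src in enumerate(lists):
            if pos < len(src):
                obj, _ = src[pos]
                seen.setdefault(obj, set()).add(src_id)
        # Stop once k objects have been seen in every source.
        if sum(1 for s in seen.values() if len(s) == m) >= k:
            break
    # Random access for missing scores, then aggregate and pick top k.
    scored = [(agg(lookup[i].get(obj, 0.0) for i in range(m)), obj)
              for obj in seen]
    return heapq.nlargest(k, scored)
```

For a monotone aggregation function such as sum or min, the stopping condition guarantees the true top k are among the objects already seen, so the algorithm avoids scanning the lists to the bottom.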

With Phokion Kolaitis, Lucian Popa, and Wang-Chiew Tan, Ron laid the foundations for the use of schema mappings, which describe how to convert data from one format to another (data in the second format is called a "solution"). The main problem is that there may be many solutions, and it is not clear which solutions are "good." They  defined a particular class of solutions, called "universal," and showed that these are the good solutions.  This paper is considered the fundamental paper in the area.  Their schema mapping work has now been implemented in multiple IBM products, including DB2 Control Center, Rational Data Architect, and Content Manager.
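The flavor of this can be shown with a toy chase step in Python (the relation names and the mapping rule are hypothetical, not from their paper): an existentially quantified value in the mapping becomes a labeled null in the target, and the result is a universal solution in the sense that any other solution can be obtained from it by substituting concrete values for the nulls.

```python
from itertools import count

_null_counter = count()

def fresh_null():
    # Labeled null: a placeholder for a value known to exist but unknown.
    return f"N{next(_null_counter)}"

def chase(source):
    """Apply the mapping  Emp(name, dept) -> exists d:
    Works(name, d) AND Dept(d, dept).  Each source tuple gets its own
    fresh null for d, yielding a universal solution."""
    works, dept = [], []
    for name, dname in source:
        d = fresh_null()
        works.append((name, d))
        dept.append((d, dname))
    return works, dept
```

The nulls record exactly what the mapping implies and nothing more, which is what makes the solution "good": it neither invents concrete values nor merges tuples that the source does not force together.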

Ron shares a retrospective on his database journey at IBM: 
I spent the first two years of my IBM career at Yorktown, where I worked on miss ratios.  I thought I did some pretty good research that I was proud of  on miss ratios, but I found it quite discouraging that this work was almost never cited, and I got very few requests for reprints (the way it worked in those days was that someone who wanted a copy of your paper sent you a postcard, and you mailed them a copy). For family reasons, I wanted to transfer  to IBM Research in San Jose.  The good news about my miss ratio work was that I used it as a successful "job talk" to induce the folks in IBM San Jose Research to transfer me in (I was actually "traded" for someone at San Jose who wanted to transfer to Yorktown).  Once I transferred to San Jose,  I decided to try to work on something different that the world would actually care about.  I looked around the lab to see who seemed interesting, and I pretty quickly decided "Ted Codd looks really interesting."  Ted was the father of relational databases (in fact, he eventually won the Turing Award for that).  So I started working with Ted on relational databases.  To my delight, all of a sudden lots of people began citing this new research of mine, and I got lots of requests for preprints and reprints. Therefore, I decided to make databases my research focus, and databases have remained one of my main areas of research ever since. 

Ronald Fagin is a member of the IBM Academy of Technology. He has won an IBM Corporate Award, eight IBM Outstanding Innovation Awards, an IBM Outstanding Technical Achievement Award, and two IBM key patent awards. He has published over 100 papers, and has co-authored a book on "Reasoning about Knowledge." He has served on more than 30 conference program committees, including serving as Program Committee Chair of four different conferences.

He received his B.A. in mathematics from Dartmouth College, and his Ph.D. in mathematics from the University of California at Berkeley.

Accomplishments include:
  • IEEE 2011 Technical Achievement Award for "pioneering contributions to the theory of rank and score aggregation"
  • Winner of the 2004 ACM SIGMOD Edgar F. Codd Innovations Award, a lifetime achievement award in databases, for "fundamental contributions to database theory"
  • IEEE Fellow for "contributions to finite-model theory and to relational database theory"
  • ACM Fellow for "creating the field of finite model theory and for fundamental research in relational database theory and in reasoning about knowledge"
  • AAAS Fellow for "fundamental contributions to computational complexity theory, database theory, and the theory of multi-agent systems"
  • Named Docteur Honoris Causa by the University of Paris
  • Named "Highly Cited Researcher" by the Institute for Scientific Information
  • Recipient of Best Paper awards at the 1985 International Joint Conference on Artificial Intelligence, the 2001 ACM Symposium on Principles of Database Systems, and the 2010 International Conference on Database Theory

About IEEE: With nearly 85,000 members, the IEEE Computer Society is the world's leading organization of computing professionals. Founded in 1946, it is the largest of IEEE's 38 societies. The Computer Society is dedicated to advancing the theory and application of computing and information technology. 

Big Data University


Rutgers and IBM team on a new High Performance Computing Center. 

Soon to be armed with a new IBM Blue Gene/P high performance computing center, Rutgers University will crunch big data from the life sciences to finance, and even do some climate modeling.
Partnering for analytics

RDI2 will be one of only eight of the nation’s 62 scientific computation centers with an industrial partnership program.

The advisory committee for the center – Rutgers Discovery Informatics Institute (RDI2) – is already looking into providing higher fidelity climate modeling for the state.

Another example of the center’s potential use is the molecular modeling and data analysis of the influenza virus for better vaccine development. With the H5N1 avian flu and H1N1 swine flu co-circulating, health officials worldwide are concerned about a potential pandemic. RDI2 will provide scientists from Rutgers University, pharmaceutical companies in New Jersey, and IBM a way to model influenza evolution pathways; predict antigen-antibody bindings; and analyze the big data from both viruses’ sequence databases and their structural conformations, using atomic-level modeling.

High(est) Performance Computing in New Jersey

The Blue Gene system will deliver tens of Teraflops of compute power when completed. Rutgers expects RDI2 will house one of the most powerful academic supercomputers available for commercial use when fully built out – providing HPC resources via the cloud to Rutgers faculty members and regional organizations in need of better ways to analyze extremely large data sets.


RDI2 will not only work with private and public organizations, but will also train students in advanced analytics. As interest in and usage of the center grow, we also envision upgrading the hardware to Blue Gene/Q systems to offer hundreds of Teraflops of power.

3.26.2012

26 years at IBM Research - Zurich. A personal summary

Dieter Jaepel retires from IBM

Dr. Dieter Jaepel, computer scientist and executive briefer at the Industry Solutions Lab, which is part of the IBM Forum Center organization in Europe, looks back on a rich and varied career.

Q. What brought you to the IBM Research – Zurich lab?

DJ: When I joined IBM in 1986, the idea was to strengthen the software work at the Zurich Lab. For me that was quite a change of pace. I had been doing research work to build sorting machines for the post office, with my contribution being the logic that captures the address field on an envelope and computes the control signals for the sorter.

Q. How did you approach the task of strengthening the software work?

DJ: It required coming up with a solution design—in today’s parlance—and, once you had that, to make a conscious decision about how to implement the various parts with a set of performance characteristics in mind.

Those parts that had a rather fixed design were clearly candidates for hardware implementations; the ones that required a high level of flexibility needed to be implemented in a “softer” way.

Q. So there was already a clear separation between hardware and software at that time?

DJ: As a matter of fact, I arrived at the Zurich Lab with two basic convictions with regard to my work: first, not to separate IT systems a priori into hardware and software; secondly—but equally important—that the greatest source of interesting research topics is in the area of applications.

So, if you will, my entire career at the Zurich Lab has been guided by these two principles.

Q. What were some of your first research interests?

DJ: In the beginning, I worked on telecommunications. That was the time in which LANs and MANs started to emerge, and it was obvious that these networks had to be entirely digital because multimedia applications were considered to be the driver of broadband technology. At that time, IBM thought communications solutions were a good business to be in. But the markets had different ideas.

Q. So you had to change fields?

DJ: The big opportunity for me to take a different approach came somewhere around 1993. We had just finished a piece of research work with Nokia about the new GSM networks. That work had resulted in a live prototype showing the integration of different categories of data communications such as interactive host access, fax, and text messages into the GSM architecture.

Q. What effect did that have within IBM overall?

DJ: More or less coincidentally, two major things happened at the same time: Lou Gerstner became CEO of IBM, and the insurance market became deregulated. Gerstner wanted IBM Research to add a focus on industry solutions.

The insurance industry was concerned about the implications of deregulation. In a way, this was a perfect match in the sense of my two guiding principles: there was an industry issue, and it was obvious that we needed to approach it without differentiating up front between software and hardware.

On the surface, the insurance industry felt that the one element they had to consider as part of any approach was mobile computing, exploiting GSM and the new laptops.

Q. In your opinion, who at the Zurich Lab had the greatest influence on you and your work?

DJ: That would be Karl Kuemmerle, director of the IBM Zurich Lab at that time, who simply launched me into this new direction by introducing me to the client. I never learned what his exact motivation was, but I liked the idea from day one.

The interactions with the customer led to a pilot project to demonstrate how mobile systems can be integrated into the existing core systems. Luckily, the customer was willing and prepared to discuss how to restructure their whole solution architecture, and the project was extremely successful.

That customer still uses the architecture today, and I just recently learned that the last part of their application portfolio has now been integrated—after more than 15 years.

Q. How did this customer-focused approach change IBM’s business?

DJ: Lou Gerstner created an organization called IBM Industries, one unit of which was the Insurance Industry. These groups supported the so-called Insurance Research Center (IRC), with Dan Yellin heading up the group in Hawthorne, and me leading the team in Zurich.

At that time, the Industry Research Center was charged with studying the industry context in detail, and mapping it to study the impact of IT on the business architecture.

The validity of the findings was to be demonstrated by pilot projects, known today as FoaKs (First-of-a-Kind projects). Bottom line: we were looking for a combination of business research and technology research.

Q. How did the insurance industry react to the new products you developed?

DJ: I’ll never forget attending Insurance Directors Conferences in Venice, Prague, and other places, always with a couple of brand-new prototypes in my luggage. Conference participants were extremely keen to have a look at every new development.

Q. Is this what led to the establishment of IBM’s Industry Solutions Lab, the executive briefing facility to connect researchers with the market?

DJ: The creation of the Industry Solutions Lab emerged more or less as a logical extension of that.

Today it’s different of course. The Institute for Business Value now produces the studies. Pilot projects are now generally part of the FoaK program, which is still under the umbrella of ISL.

Q. Apart from a long and extremely productive career within IBM, you are also a very accomplished musician. I understand you play the viola professionally. Is this something you plan to pursue more actively after your retirement?


DJ: Now that I’m retiring, I’m not only looking back but also forward to the next phase of my life. With the Swiss Alps at my doorstep, I'll certainly do a lot of hiking. Beyond that, I’ve already received several requests to help out in various musical groups—yes, I’m a trained viola player.

Model railroading, another passion of mine, will give me a chance to tackle some engineering challenges. I'll see how I can fit all of that in with the various activities and interests of my family.

3.21.2012

Looking back on 25 years of service to IBM

Christophe Rossel, physicist at the IBM Research – Zurich Laboratory, shares some thoughts and insights on a quarter century of service.

Q. Christophe, congratulations on your 25th service anniversary. What brought you to the Zurich Lab?

CR: I was attracted to the reputation of IBM. After doing a post-doc, I had been working on superconductivity as a research associate at UCSD in San Diego, California, and I had contacts with several IBMers at the Almaden and Watson Labs.

After four years I started to look for career opportunities in Europe to be closer to my family. Having had such a good experience in the US, working for IBM seemed a perfect match.

Research at the Zurich Lab was so famous that I was happy to join the Physics Department — now called Science & Technology — in 1987. I first worked on resonant tunneling in III-V heterostructures with Pierre Guéret, and in parallel continued my activities in superconductivity. Alex Müller and Georg Bednorz had just won the Nobel Prize in Physics for their discovery of high-Tc superconductivity, which made it a really exciting time, and so the IBM Research – Zurich Lab was the place for a young physicist to be.

Q. You originally come from Neuchâtel in the francophone region of Switzerland. Was it a big adjustment to cross the infamous "Rösti Ditch"?

CR: While working on my PhD in Geneva I never would have considered moving to the German-speaking part of Switzerland, although I had been exposed to the culture by my mother, who is originally from the Grisons.

But as seen from the US perspective, the distance from Geneva to Zurich is negligible and so I didn’t think twice before moving to Rüschlikon.

By the way, my wife is also a German-Swiss and so I have long known that there is life beyond the Rösti Ditch (laughs).

Q. Could you describe a couple of ways in which the Zurich Lab has changed since 1987?

CR: Well, it’s grown, for one thing.

Another thing that has changed dramatically in the past 25 years is the pace of work as a result of the incredible advances in computing and communication technologies. Just think of the number of emails and conversations we exchange every hour with our laptops and smartphones. This was inconceivable 25 years ago!

The main change, though, is that IBM has become more of a team-oriented company with stronger interdisciplinary interactions within the Zurich Lab as well as throughout the IBM Research division. This has had a very positive effect because IBM Research is extremely rich in terms of expertise.

Back then, you typically had one research staff member and one technician pursuing their own research projects. Today, research is also much more applied, business-oriented and influenced by the search for external funding and collaborations.

Q. What have been some of the highlights of your career so far?

CR: Clearly the period of the high-Tc superconductivity boom, when so much attention was focused on Zurich, was a highlight.

Another one was certainly the Outstanding Achievement award I received for my work on torque magnetometry used to measure the properties of superconducting microcrystals.

I’ve worked on many different aspects of experimental condensed-matter physics and materials science throughout my career, and it’s this opportunity to pursue different challenges as they arise that makes our work here so compelling, even after 25 years.

Always reinventing yourself is a real challenge and an opportunity not only for the company but also for us employees.

I have also enjoyed organizing and participating in conferences and workshops as well as serving on various editorial boards of scientific journals.

Q. What do you enjoy most about the working environment at the Zurich Lab?

CR: What I especially appreciate is the intensive contact with the many post-docs and students who pass through our Lab. They bring fresh ideas and new thinking, and we are in a fortunate position to interact with them without having teaching obligations like at universities.

I’d also like to mention how much I appreciate the support I’ve always enjoyed for my work within the greater Physics community. For example, I served as the executive secretary of the European Physical Society for 5 years, which opened lots of networking possibilities. Four years ago, I became president of the Swiss Physical Society.

Those are further highlights of my career, and I’m grateful that I had the opportunity to pursue these interests and cultivate these relationships. They’re a good balance to my research work at IBM.

It’s so important to interact within the scientific community, not just within one’s own company.

Q. Here’s a photo of someone I’m sure you’ll recognize. If you could give this young man some advice, what would you tell him?

CR: (Laughs) Wow, I look so young! Well, let’s see. I would encourage him to keep his enthusiasm and passion for science. Life is too short to waste on something you’re not passionate about. Have fun, face the challenges, and share your enthusiasm!

3.20.2012

Creek Watch iPhone App Goes Social


Editor’s note: The following article is a guest post by Conservation Biologist Erick Burres. He leads the state of California’s Citizen Monitoring Program: the Clean Water Team.

Water is one of our most precious resources -- vital for our survival. This week, its significance is highlighted in several commemorations: World Water Day and World Water Monitoring Day on March 22, and International Earth Day on March 27.

Protecting and improving water quality are extremely important issues for all societies. And now with the Creek Watch iPhone App, every time a person sees a stream they can take a photo of it to help collect water quality data such as: 

1. What is the amount of water?
2. What is the rate of flow? 
3. And what is the amount of trash?

… all based on location and the photograph.

Here's a video on how it works:



Citizen Science

Over the years, concern about water quality has prompted a rise in citizen science: monitoring streams, rivers, lakes, estuaries and oceans. These activities have included collecting water quality data, evaluating fish and wildlife habitat, and making visual observations of stream health.

In California, as in other regions, citizen water quality monitors have been collecting water quality data for more than a decade, saving government agencies many tens of millions of dollars in monitoring costs and providing water quality data that would otherwise not get collected. Citizen data are being used to guide local watershed management and are a critical element of regional and statewide assessments of surface water quality for drinking, fishing, swimming, ecosystem health and other beneficial uses.  

The Creek Watch App is useful in promoting all of these efforts. It turns all of us into scientists, contributing water quality data -- data that can lead to increased understanding and protection of the very water that we use and need. As a learning tool, Creek Watch introduces people to their streams and to water quality concepts. It is also a great crowd-sourcing tool that collects much-needed water quality data from around the world. Additionally, Creek Watch enables individuals and groups to build monitoring programs and answer local questions about water supply and water quality.

Water quality depends on all of us. So join the Clean Water Team by downloading the Creek Watch App to start monitoring the creeks, streams and rivers in your community.

Erick Burres is a conservation biologist who has been involved with many projects benefiting the protection of terrestrial and aquatic habitats, the species that depend on those habitats, and the recovery of threatened and endangered species. Mr. Burres has a Master of Public Administration degree from California State University Long Beach and a Bachelor of Science in Zoology from San Diego State University. He has worked for non-profit organizations, private businesses and government agencies. His goal has been to actualize constructive and sustainable environmental stewardship by enrolling citizens in conservation management activities. Currently, he lives in Los Angeles and leads the state’s Citizen Monitoring Program, the Clean Water Team. An app like Creek Watch was a dream of his and on the Clean Water Team’s wish list. He is thankful that IBM created Creek Watch and shared an interest in promoting environmental education, citizen monitoring and water quality.

How to share your Creek Watch findings with your social networks:




3.14.2012

What works best for patients like me?

Clinical genomic analytics platform for decision support takes some of the guesswork out of medical treatments

On March 14, IBM announced the creation of a new clinical genomics analytics platform to help physicians and administrators at Italy's Fondazione IRCCS Istituto Nazionale dei Tumori make decisions about which treatments could work most effectively for individual patients.

INT physician using the system
This effort in personalized medicine, or choosing specific treatments based on a patient’s personal genetic profile, is an important new area that could take some of the guesswork out of treating diseases like cancer and AIDS.

"Data from patients is a gold mine that helps us discover ways to treat other cases more successfully," said Boaz Carmeli, a healthcare researcher at IBM Research – Haifa. Carmeli's team has been leading the research into how clinical genomics can use data to get deeper insights into medical processes and how computers can be used to evolve and improve those processes.

"Before beginning this research, I always thought that when we get sick the doctor will tell us what treatment will work best," said Carmeli. "In reality, although we can have one diagnosis, there are many treatment options available. Choosing the best one depends on a huge number of factors, including our genetic profile, age, weight, family history, general health, and the current state of the disease."

The new Clinical Genomics analysis platform, also known as Cli-G ('clee-gee'), integrates and analyzes clinical data and evidence from patient records and incorporates expertise gathered from leading medical specialists, clinical healthcare guidelines, and other sources of knowledge.  Clinicians access the system through a standard web browser, which offers a simple and intuitive graphical user interface.

Boaz Carmeli, IBM researcher
"Most physicians base their treatment decisions on guidelines from what is known as 'evidence-based medicine'," explained Carmeli. "These guidelines stem primarily from the results of clinical trials, and help guide doctors with rules for what treatment works best. But how relevant are these rules for an 80-year-old woman where no treatment at all may represent the best option?

“The clinical trials don't cover all populations and other evidence doesn't take into account factors such as the patient's emotional state, lifestyle, family history, or genetic profile."

In 30 to 40 percent of the cases, physicians formally declare that they are recommending treatment options outside the evidence-based medicine guidelines. The IBM solution gathers data from hospitals to gain insight into what was done in these cases, alongside information from other knowledge sources – including the physicians themselves.

"When we started the project, we had many questions on when and why physicians stay within – or diverge from – the guidelines," said Carmeli. "There are no clear answers, but we do see many different personalities, disciplines, hospital cultures, and patients. So, the alternatives are different for everyone."

By analyzing the cases, identifying trends, and then introducing formalistic tools, IBM researchers aim to provide insight into treatment options and allow the physicians to investigate the reasoning behind these options.

Battling breast cancer with genetics

One example of how this solution can help is in the area of breast cancer. Medical research has shown that certain genetic sequences can indicate whether a breast cancer patient is predisposed to respond positively to chemotherapy – but the test used to identify this genetic sequence is expensive, and hospitals are not sure it's worthwhile.


Today, adjuvant chemotherapy or hormonal therapy helps treat breast cancer in about 30 percent of the cases. Yet treatment is being given to 80 percent of breast cancer patients. As a result, more than half the patients are receiving treatment that will not help them. Using Cli-G, medical staff can verify whether the tests can accurately identify the different cases and predict the effectiveness of the treatment.

"Computers are used nowadays to provide support and assistance wherever possible," explained Carmeli. "We don't expect to know or remember everything, but many of us walk around with mobile devices that allow us to have extensive knowledge at our fingertips.

"The same support can make healthcare work smarter … We envision a world in which physicians also have immediate access to the value of decision support provided by technology. In the end, this ultimately enables patients to get the best possible treatment."

For more about this clinical genomics solution, read the Smarter Planet article.

3.09.2012

SXSW Spotlight: Data as Narrative



Editor’s note: This brief Q&A series will feature IBM researchers making presentations at the 2012 South-by-Southwest Interactive Conference in Austin, Texas.

Join the conversation: #sxswibm #timeMap

Research Scientist Dr. Jennifer Thom, a member of IBM’s Visual Communications Lab, is part of the Maps of Time: Data As Narrative panel – a discussion about understanding and visualizing data over time – on Monday, March 12.

Q: What are the kinds of things you work on as a researcher at IBM's Visual Communications Lab?

I study online social behavior and focus on how social media can influence and create new opportunities for collaboration. I’m especially interested in how these tools can help distributed groups – who speak different languages and work in different time zones – work together more effectively.

Q: What's an example of a data visualization project you've worked on?

Social media is incredibly multilingual, and the systems within IBM are no exception. Many IBMers contribute to our social tools in their native languages, which helps create community for a global enterprise. At the same time, there are multilingual IBMers who have varying levels of fluency in the different languages they speak, yet they encounter and consume content in their non-native languages in their daily work.

I was interested in improving the reading experience for multilingual IBMers – especially since an increasing amount of information is shared in these spaces as our business becomes more social.

In collaboration with a graduate student at MIT, we’ve worked on visually transforming online blogs to improve the reading experience for IBMers who consume content in their non-native language. From user data that we gathered, we developed a set of design criteria to reduce visual distractions in order to improve scanning and skimming of social software content.

Initial evaluations of this approach are promising, as global IBMers have indicated that this approach has made the reading experience of this content more enjoyable.

Q: At this year's SXSW, you're on a panel about "Data as a narrative." How does this idea go beyond something like Facebook's timeline?

I’m interested in using the past to help predict the future and looking at how we can learn from the data that we share on these social media systems to solve problems.

For instance, one recent project that I completed looked at the #stuffibmerssay meme that emerged on Twitter at the end of 2011. A small number of IBMers spontaneously created this hashtag to append to humorous tweets that commented on different aspects of life as an IBMer.

The number quickly grew over a period of a few days, as different IBMers contributed their perspectives and experiences and created a shared experience for IBMers who often work in diverse environments around the globe. When we looked at the content of the tweets more closely, we realized that collectively they helped expose aspects of our organizational culture.

Since then, we’ve been thinking about ways that we can use these tweets as a barometer of sorts: whether about IBMers, or the systems we maintain, develop and deliver. We’ve also thought about using memes as an elicitation tool to get a sense of what people are working on or thinking about.

*Note: this research will be presented at ICWSM 2012 in Dublin in June.

Q: How might the idea of cobbling together all of a person's (or company's) online data look? Would this have to be a new social network?

So, I think the project that I just described is one approach where we can leverage people’s existing behavior. The challenge is in aggregating multiple feeds over multiple systems and helping users make sense of the fire hose.

One approach would be to create better visualizations of this data so that people can make sense of what’s being put out there. 

Q: How could a person -- or a business -- use a timeline of their online lives, while maintaining security and privacy?

That’s a huge challenge, since the same aggregation that can help us become more predictive can also uncover patterns.

I suspect the best answer to this is a combination of policy and design – helping individuals better opt out of this aggregation, and trying to figure out the right amount of noise so that the aggregation of data and timelines doesn’t make individuals so immediately identifiable.
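One established way to calibrate such noise is the Laplace mechanism from differential privacy. The sketch below is a minimal illustration of that idea under my own assumptions — the function, parameters, and use of differential privacy are not described in this research:

```python
import math
import random

def add_laplace_noise(value, sensitivity=1.0, epsilon=1.0):
    """Return value plus Laplace(0, sensitivity/epsilon) noise.

    Smaller epsilon means more noise and stronger privacy; larger
    epsilon means the aggregated value stays closer to the truth.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    # Inverse-CDF sampling of the Laplace distribution
    return value - scale * sign * math.log(1.0 - 2.0 * abs(u))

# A count of 42 released with epsilon = 0.5 privacy budget
noisy_count = add_laplace_noise(42, sensitivity=1.0, epsilon=0.5)
```

The privacy/utility trade-off the answer describes is exactly the choice of epsilon here.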

Q: What's a favorite data visualization of yours? What makes it a good example?

I recently came across Yanni Loukissas’s visualization of the Apollo 11 moon landing and thought it was a great example of storytelling through disparate types of data linked by time.

In particular, I thought the combination of the audio channel with the output from the different computer systems involved in managing the launch was a nice integration of the social and technical aspects of complex coordination.


Apollo 11 Lunar Landing Visualization, 1969 (2011) from Yanni Loukissas on Vimeo.


Attending SXSW? Add Data as a Narrative to your schedule.

3.08.2012

Holey Optochip Transfers One Trillion Bits of Information

IBM scientists have developed a prototype optical chipset, called Holey Optochip, that is the first parallel optical transceiver to transfer one trillion bits – one terabit – of information per second, the equivalent of downloading 500 high-definition movies. The results will be reported at the Optical Fiber Communication Conference taking place in Los Angeles, Calif.

Read the announcement on ibm.com.

Holey Optochip capabilities
  • The raw speed of one transceiver is equivalent to the bandwidth consumed by 100,000 users at today’s typical 10 Mb/s high-speed internet access.
  • Or, it would take just around an hour to transfer the entire U.S. Library of Congress web archive through the transceiver.
  • The transceiver consumes less than 5 watts; the power consumed by a 100W light bulb could power 20 transceivers.
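The bandwidth and power figures above can be checked with simple arithmetic:

```python
# One terabit per second vs. typical 10 Mb/s broadband
terabit_per_s = 1_000_000_000_000   # bits per second
user_rate = 10_000_000              # 10 Mb/s per user
users = terabit_per_s // user_rate  # users served at full rate

# Power: how many <5 W transceivers fit in a 100 W light bulb's budget
bulb_watts = 100
transceiver_watts = 5
transceivers = bulb_watts // transceiver_watts
```

This reproduces the 100,000-user and 20-transceiver figures cited in the announcement.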

SXSW Spotlight: Accessible Complex Data

Editor’s note: This brief Q&A series will feature IBM researchers making presentations at the 2012 South-by-Southwest Interactive Conference in Austin, Texas.

Join the conversation: #sxswibm #accessdata

Susann Keohane and Brian Cragun, consultants for IBM Research’s Human Ability & Accessibility Center (referred to as the AbilityLab), will present Beyond a Thousand Words: Accessible Complex Data – a discussion about the accessibility challenges of analyzing, visualizing, and using today’s big data – on Tuesday, March 13.

Q: What kinds of solutions does the IBM AbilityLab develop? What is a recent example?

Susann Keohane
Our lab develops solutions to help everyone participate in technology. For example, Accessible Workplace Connections is a web-based application that lets employees with disabilities have their necessary work accommodations delivered, changed, supported, and maintained effectively and efficiently.

And our Access My City application provides real-time transit data, geo-location and mapping technologies, and publicly available accessibility information, to mobile devices to help disabled residents and visitors navigate a city.

Check out some of our other research projects here.

Q: At SXSW, you will be discussing "Accessible Complex Data." What kinds of new accessibility challenges are being posed by complex data? 

Brian Cragun
We all struggle to find pearls in the ocean of complex data. Well-chosen graphical visualizations have the ability to communicate key information quickly.

But as generally implemented, these complex visualizations are inaccessible to the blind. The question we are working to answer for blind users is: how can we approach the high-bandwidth understanding and autonomous discovery of key information that sighted users gain from complex visualizations, such as stock market history, census trends, or scientific data?

Q: What about smart devices – phones, televisions, etc. – that access the data? How are they a part of making information accessible (or preventing accessibility)?

Smart devices make information available anywhere at any time. When users move to a smart device, many will be affected by what we call "situational" disability: outside light, a tiny screen, using one hand, riding on a bumpy road, or needing to access information without touching or looking at the device while driving.

More than ever, these situations emphasize the need for inclusive design. The research we work on for core disability types (deaf, blind, mobility impaired) will benefit all users of smart devices.

Q: How is IBM making today's flood of data, and the way it's analyzed and shown, more accessible?

This is a great question – and the core of our presentation.

In current products, we provide user interaction with graphs, allowing the user to sift, sort, scale and filter the information. These capabilities are already available for the visually impaired. Now, research is looking at navigation of the graphs with audible cues, so users can discover the visualization themselves.  

We're also looking at how to convert the visualizations into descriptive text so any user needing information in a hands-free or eyes-free environment can benefit. Technologies on the horizon, such as electrostatic screens, electrical sensations, and other tactile feedback tools, will provide other modes of sensory exploration to effectively utilize complex data.
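As a rough illustration of the audible-cue idea — my own sketch, not the AbilityLab's actual prototype — a data series can be mapped linearly onto a pitch range, so rising values are heard as rising tones:

```python
def sonify(values, low_hz=220.0, high_hz=880.0):
    """Map data values linearly onto a pitch range (in Hz).

    A screen reader or audio layer could then play these tones as
    the user navigates the graph point by point.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

sonify([0, 5, 10])  # -> [220.0, 550.0, 880.0]
```

Real systems layer more on top (stereo panning for the x-axis, timbre for series identity), but the core mapping is this simple.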

Q: What needs to happen to make accessibility an automatic part of the process in expressing data?

Better mappings of visual information to other sensory modes need to be researched and proven.

A taxonomy of graphs and content with corresponding navigation, and audible output, can standardize interactions, and provide a foundation for new graphs in the future.

Attending SXSW? Add Susann and Brian’s presentation to your schedule.

3.07.2012

Smart water analytics – when IT meets H2O

First Of A Kind project uses smart analytics to cut water loss in Sonoma County

"It took a crew of technicians and special equipment to block off traffic, climb down under the road, and make the adjustments to the water pressure valves," said Segev Wasserkrug, lead researcher for IBM's First Of A Kind (FOAK) project that brought smart water analytics from Haifa, Israel to Northwest California's Sonoma County. "This was a real experience in 'physical meets digital'."

Segev Wasserkrug reading water data
Announced on March 7, IBM's solution for improved water pressure management is already helping Sonoma County better manage pressure, resulting in a reduction in the number of burst pipes and improved water quality.

"Finding a better way for engineers to manage the pressure in a water network consisting of pressure valves, pipes, pumps, tanks, and sensors was no simple matter," explained Wasserkrug. "If there's too much pressure, more bursts are expected, and any small leak will result in even greater water losses. When the pressure is too low, tanks may not fill to the proper level and people may have problems using their taps and showers."

Before working with IBM, engineers from the Valley of the Moon Water District (the organization that covers Sonoma County's water management) had to manually adjust the pressure of each valve and then wait for daily readings to see how each adjustment affected other areas of the water system. Getting all 10 valves adjusted to maintain optimal pressure across the system was a time-consuming and complex process that was done only twice a year -- when the seasons change from winter to summer and back to winter.

"In the summer, we see increased demand for water because of the warm weather associated with filling swimming pools, summer water activities, and the need for more irrigation," said Wasserkrug. "In the transition to winter, we see a 30 to 40 percent reduction in water use because it rains, people don’t need to irrigate the land as much, and winter activities don't typically require much water."

Now, IBM analytics provide Valley of the Moon engineers with detailed information on optimal settings for each valve based on what’s happening across the entire system so they can be adjusted as necessary. And the IBM solution is unique in its ability to manage simultaneous changes for multiple valve settings.
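The advantage of optimizing all valve settings jointly, rather than tuning one valve at a time, can be seen in a toy model. Everything below — the linear pressure response, the coefficients, the targets — is invented for illustration and is not IBM's actual analytics:

```python
from itertools import product

# Toy model: pressures at two monitoring points respond linearly to
# two valve settings; each valve affects BOTH points (coefficients
# are made up for illustration).
def pressures(v1, v2):
    return (50 + 4 * v1 + 1 * v2, 48 + 1 * v1 + 3 * v2)

TARGET = (60.0, 58.0)

def cost(settings):
    """Sum of squared deviations from the target pressures."""
    p = pressures(*settings)
    return sum((pi - ti) ** 2 for pi, ti in zip(p, TARGET))

# Search all valve combinations jointly; tuning valves one at a time
# can miss this optimum because of the cross-coupling terms.
best = min(product(range(0, 5), repeat=2), key=cost)
```

Production systems use hydraulic simulation and proper optimization solvers rather than exhaustive search, but the "simultaneous changes for multiple valve settings" idea is the same.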

"I think both sides gained considerable insight. For me, going out into the field and getting readings from actual sensors was very different from seeing data on the computer. The water engineers gained confidence in our system's analytics as we began to see the results from each subsequent set of adjustments," said Wasserkrug.

Pressure valve adjustments

"We’re very pleased with the recommendations of the system. I've been operating this system for 30 years, and I never thought of making the changes the IBM system recommended," said Paul Gradolph, Operations and Maintenance Supervisor from Valley of the Moon Water District. 

“Our ongoing collaboration with IBM is a clear indication of how the innovative use of technology helps us effectively manage the resources in our care.”

The new pressure management solution transformed the water network into a proactive system instead of one that was reactive. Beyond simply tracking the data, the analytics can identify trends in demand and help the engineers anticipate upcoming changes. This, in turn, means the engineers can make adjustments in advance and prevent problems such as pressure spikes before they occur.

The IBM Research - Haifa team is now working on a generic solution that can be applied anywhere, taking into account additional types of equipment such as pumps, as well as different variables such as the frequency of pressure changes and other unique water system characteristics. The next step with Sonoma County is to develop a solution that automatically identifies leaks based on mathematical models that compare the water entering the system with the amount coming out at various locations.
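The core of such leak detection is a mass balance: water metered into the system should roughly equal the water metered out, and a persistent gap suggests a leak. This minimal sketch uses a made-up tolerance and interface, not IBM's actual models:

```python
def leak_indicator(inflow, outflows, tolerance=0.05):
    """Flag a possible leak when metered inflow exceeds the sum of
    metered outflows by more than `tolerance` (as a fraction of inflow).

    Real systems compare time-aligned flow series and account for
    legitimate unmetered use; this only shows the balance idea.
    """
    loss = inflow - sum(outflows)
    return loss / inflow > tolerance

# 1000 units in, 900 metered out: 10% unaccounted -> possible leak
leak_indicator(1000.0, [400.0, 350.0, 150.0])  # True
```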

"Every change we make to the system must be designed to bring about real improvement and not something fleeting," said Wasserkrug. "We have to remember that we are influencing one of the basic human needs – the water we use and drink."

3.01.2012

Inventors’ Corner: Hall of Fame induction for magnetic memory breakthrough

Two IBM scientists, Drs. Lubomyr Romankiw and David Thompson (retired), will be inducted into the Inventors Hall of Fame on May 2, 2012 for their three U.S. patents that revolutionized data storage density and device ubiquity.

IBM has more than 4,000 active storage patents.
The patents – 4,295,173, 3,921,217, and 3,908,194 – cover techniques for producing thin-film magnetic heads in storage devices. These heads greatly increased data storage density while drastically reducing cost, and today are used in everything from computer hard drives and commercial disk storage to digital cameras and mobile devices.



Dr. Romankiw holds over 65 patents and has published over 150 scientific papers. He is an IBM Fellow, a member of the IBM Academy of Technology, an IEEE Fellow, and an Electrochemical Society Fellow.

Dr. Thompson holds over 20 patents and has published over 30 scientific papers. He is an IBM Fellow, a member of the IBM Academy of Technology, an IEEE Fellow, and a member of the National Academy of Engineering.

Other IBMers in the Inventors Hall of Fame

Other IBM Milestones in Memory


RAMAC
The Floppy Disk
DRAM
Magnetic Tape Storage
Rewritable Magneto-Optical Disk
Racetrack Memory

Atomic Scale Magnetic Memory