12.22.2011

The Payoff from the IBM-Google University Research Cloud

Scientists love it when their work has a direct impact on society, so Naphtali Rishe, director of Florida International University’s High Performance Database Research Center, is thrilled that much of his lab’s data is used by real estate professionals and urban planners. Check out one of his Web sites, which shows detailed maps of Miami-Dade County real estate sales data. Anybody who wants to can dive into the data in search of valuable pieces of information within a sea of geographic sales trends.

One particularly interesting project that Rishe and his colleagues undertook themselves was an analysis of the impact of the Gulf oil spill on real estate values on Florida’s west coast. At a time when a lousy economy had already depressed housing values, the Gulf spill drove them down even further.

Rishe is just one of dozens of university researchers who have taken advantage of computing resources made available through the IBM/Google Cloud Computing University Initiative since the program was launched four years ago.  “This is bread and butter for researchers like me,” says Rishe.

Google and IBM are now concluding the program, since high-performance cloud computing clusters have become widely available to researchers at reasonable cost.

With funding help from the U.S. National Science Foundation, the cloud computing initiative provided assistance to hundreds of university scientists working on research projects that could help us better understand our planet and our bodies, and push the limits of the World Wide Web. In all, 1,328 researchers performed more than 126 million computing tasks on the IBM/Google Cloud. Researchers using the cluster have produced 49 scientific publications, educated thousands of students in parallel computing, and helped support numerous post-doctoral candidates, in fields as diverse as astronomy, oceanography and linguistics.

For instance, researchers at the University of Maryland used the cloud to greatly reduce the processing time for sequencing an organism’s genome, and scientists at Kyushu University in Japan used it to produce a 3D map and satellite navigation system.

Rishe and his colleagues at FIU used the technology frequently to help them develop one of their primary initiatives, TerraFly, a tool for visualizing and querying geospatial data. The TerraFly data collection includes aerial photography of almost the entire United States at resolutions from 7 centimeters to 1 meter, street vectors, parcel polygons, U.S. Census demographic and socioeconomic datasets, daily feeds from NASA, and hundreds of other datasets.
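
To make “querying geospatial data” concrete, here is a minimal sketch of the kind of bounding-box lookup such a system answers at scale. This is not TerraFly’s actual API - the dataset and coordinates below are invented for illustration:

    # Hypothetical illustration of a geospatial bounding-box query, the kind
    # of lookup a system like TerraFly answers over hundreds of datasets.
    # (Not TerraFly's actual API; all data below is made up.)

    parcels = [
        {"id": "A1", "lon": -80.19, "lat": 25.77, "sale_price": 310_000},
        {"id": "B2", "lon": -80.13, "lat": 25.79, "sale_price": 455_000},
        {"id": "C3", "lon": -80.30, "lat": 25.70, "sale_price": 198_000},
    ]

    def parcels_in_bbox(parcels, min_lon, min_lat, max_lon, max_lat):
        """Return the parcels whose coordinates fall inside the box."""
        return [p for p in parcels
                if min_lon <= p["lon"] <= max_lon
                and min_lat <= p["lat"] <= max_lat]

    # Average sale price inside one (illustrative) slice of Miami-Dade.
    hits = parcels_in_bbox(parcels, -80.25, 25.75, -80.10, 25.85)
    print(sum(p["sale_price"] for p in hits) / len(hits))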

12.19.2011

IBM 5 in 5: Mind Reading is no longer science fiction

Brown wearing the EPOC headset.
Editor's note:  This post about IBM's 5 in 5 prediction of mind reading technology is by Kevin Brown of IBM Software Group's Emerging Technologies.

One of the many great things about working with the Emerging Technology Services team is that I am always focused on “what’s next.” For a long time speech recognition fit into this category, as the computing industry looked to make technology more pervasive, to free our fingertips from typing and to help us become more productive.

We are benefitting from this today with voice recognition for our cars, smartphones and even automated phone services for banks and travel reservations.

Now that speech recognition is becoming mainstream, and many other forms of human-computer interaction have come along, like touch and gesture recognition, we are thinking about what’s next - or in the case of the IBM 5 in 5 - what's next by 2017. In my view there will be huge leaps made in bioinformatics - a large topic, so I am referring more specifically to the use of sensors to understand our thoughts.

No longer just wishful thinking


While much of the brain remains a mystery, progress has been made in understanding and reading electrical brain activity, to the point where we can use computers to see how the brain responds to facial expressions, to gauge excitement and concentration levels, and to read the thoughts of a person without them physically taking any action.

So the idea is to use these brain signals to carry out everyday activities such as placing a phone call or turning on the lights - or, in the healthcare space, to aid rehabilitation. In fact, that is what initially inspired me to look at this field more closely.

In March 2009, Shah, an IBM colleague, had a stroke that left him completely paralyzed, unable to use his muscles, and without the ability to speak. His brain, however, was working fine - a condition called Locked-In Syndrome - which meant he could communicate only with his eyes: looking up for yes, and down for no.

Coincidentally, my wife happened to be his occupational therapist, and I demonstrated to her a device that I had recently been investigating called the EPOC from Emotiv. The device has several sensors that sit on your head and read electrical brain impulses. You can train the device so that by thinking a particular thought, an action takes place on your computer. For example, using Emotiv's software, you can see a cube on your computer screen, think about moving it to the left, and it will move. While I was initially interested in connecting it to email systems and smartphones for business users, it immediately became clear to us how this could help Shah.

Shah, being a techie himself, was open to testing it out. Amazingly, after only 8 seconds of training, he could move the cube at will on the computer screen. We then connected the device to software that could eventually allow control of his environment. Operating the headset takes a great deal of concentration, however, so more development of the technology, and more training in using it, may be needed to make it entirely effective. I'm sure this will continue developing over the next 5 years.
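
In spirit, the software side of such a setup is small: a classifier turns a window of EEG samples into a trained “thought label,” and a dispatch table maps labels to actions. A toy sketch - the classifier below is a stand-in, and Emotiv's real SDK does far heavier signal processing:

    # Toy sketch of mapping trained "thought labels" to actions.
    # The classifier is a stand-in: a real system (e.g. Emotiv's SDK)
    # does heavy signal processing on multi-channel EEG data.

    def classify(eeg_window):
        """Pretend classifier: turn a window of samples into a label."""
        energy = sum(s * s for s in eeg_window) / len(eeg_window)
        return "push_left" if energy > 0.5 else "neutral"

    actions = {
        "push_left": lambda: print("moving the cube left"),
        "neutral":   lambda: None,   # no trained intent detected
    }

    samples = [0.9, 0.7, 1.1, 0.8]   # fabricated EEG window
    actions[classify(samples)]()     # -> moving the cube left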

This isn’t the only example of progress in this area.  Scientists at UC Berkeley have designed and developed a special MRI scan that can model our visual thoughts not only while we are awake but, even more intriguingly, while we are dreaming.

All in the Applications

This is a case where the technology has become cheap enough and mobile enough to become a consumer device, but it will take the development of some compelling applications and innovative, imaginative uses over the next few years to make people truly eager to use it.

Like most technology, the EPOC and devices like it will probably get smaller by 2017. I can imagine it with completely dry sensors, worn all the time - perhaps embedded into a baseball cap - detecting a finer range of thought patterns and connected directly to my mobile phone. That would let me interact with the world just by thinking particular thoughts: I could wonder what the traffic will be like on the way home, and the information would pop up in front of me.

Think, too, about smarter cities. If everyone wearing the device were open to sharing their thoughts, city heat maps could be created showing how people are feeling, building a picture of the mental health of a city. Or musicians could create elaborate pieces based on what they are thinking about.

The applications are endless; we just have to build them.

Think this topic is the most-likely prediction, or maybe just the most innovative, among the Next 5 in 5? Vote for it by clicking "like" on IBM's smarter planet.   

IBM 5 in 5: Biometric data will be the key to personal security

Editor's note: This IBM 5 in 5 prediction about biometrics is by IBM Fellow and Speech CTO David Nahamoo.

Everything we do online, or via a computer, requires authenticating who we are – user IDs and passwords are our safeguard. But the security isn’t foolproof. Our IDs and passwords can be stolen and our mobile devices can be lost or stolen.

Over the next five years, your unique biological identity and biometric data – facial definitions, iris scans, voice files, even your DNA – will become the key to safeguarding your personal identity and information and replace the current user ID and password system.

It’s not all about what you know

We know that security improves when different biometrics and different methodologies are combined. Broadly, we use three ways to authenticate each other:

•    What you have: a badge or ID card
•    What you are: how you look, speak, walk
•    What you know: a secure piece of information or password

Think about what we have to do to authenticate our access to something online: create user IDs and passwords, and set up hint questions and site keys for dozens of accounts. Personally, I have a very difficult time keeping track of the more than 50 account log-ins and passwords I have.

Smart device, smart security

We have been moving from devices like desktops and laptops to smart devices such as mobile phones and tablets - all property that is easily lost, stolen or misplaced. These devices are not yet outfitted with operating systems and security elements as strong as those of the immobile devices of the past. Biometric security can shore up those weaknesses.

Biometric data will allow you to walk up to an ATM and access your bank account by simply speaking your name and looking into the camera. Yes, we’ve all seen the thriller sci-fi movies where a person is forced by the villain to scan their eye or finger to unlock a door. But that’s fiction. In reality, ATM cameras using facial and iris recognition may be able to detect stress, pupil dilation, and changes in heart rate and breathing patterns to establish a confidence level that the user is not in danger.

We can take advantage of the advanced technology already built into smart devices - microphones, touch screens and high-definition cameras - to fully employ biometric security options. While there is already some adoption of facial and voice recognition, combining these and other biometric data points in the near future can eliminate the hassle of memorizing, storing and securing account IDs and passwords, and at the same time give users greater confidence in their security.
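
One way to picture combining those data points is as a weighted fusion of per-modality match scores into a single accept-or-reject decision. A hypothetical sketch, with weights and threshold invented for illustration:

    # Hypothetical fusion of biometric match scores into one decision.
    # Weights and threshold are invented for illustration only.

    def authenticate(scores, weights, threshold=0.85):
        """scores/weights: per-modality match confidence in [0, 1]."""
        fused = (sum(scores[m] * weights[m] for m in scores)
                 / sum(weights.values()))
        return fused >= threshold, fused

    ok, confidence = authenticate(
        scores={"face": 0.93, "iris": 0.97, "voice": 0.81},
        weights={"face": 1.0, "iris": 2.0, "voice": 0.5},
    )
    print(ok, round(confidence, 3))   # -> True 0.936

In practice each score would come from its own matcher, and liveness cues - like the stress and pupil-dilation checks described above - could feed into the same decision.
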
Think this topic is the most-likely prediction, or maybe just the most innovative, among the Next 5 in 5? Vote for it by clicking "like" on IBM's smarter planet.  

IBM 5 in 5: Mobile is closing the Digital Divide

Editor's note: This post about IBM's Next 5 in 5 prediction about the future of mobile computing is by Paul Bloom, IBM's Chief Technology Officer for Telco Research.

Think about what you can already do with your mobile smartphone – check your bank account, tweet, watch television, and oh yeah, make a call. But all of this access still depends on where you are, and you have to initiate the communication.

Over the next five years, mobile devices will assist you in your daily life by initiating communication with you and providing helpful information based on your context. For example, when you order lunch from your cell phone, you might get a message recommending a healthier selection, based on the restaurant and your personal profile.

Since your phone will also be your wallet, bank and record keeper, your cell phone will let you know what the impact of this lunch will be on your budget and may modify the recommendation based on predicted cash flow. This is only one example of how your mobile device will have access to the results of predictive analytics based on your location, context and personal information, aiding you in every facet of your life.

Freeing up today’s networks

As more people use mobile phones, the network takes on more workload. Today, wireless data networks are already overloaded by what we’re sending over them – high-definition videos, for example. To meet the demand of all these mobile-centric services, we have to optimize and extend the network.

For example, when you download a video, the request goes to a server and all that data gets shipped from the server, down to your device.

How that could change: if the network knows that you and your neighbors are watching the same HD video, it could store a copy at another location, off the origin server and closer to where the video is being watched.
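
A minimal model of that idea: the network counts requests per neighborhood, and once a video is popular enough locally, later requests are served from a nearby cache instead of the origin server. Everything below is a simplified, hypothetical illustration:

    # Simplified model of demand-driven edge caching (hypothetical).
    from collections import Counter

    requests_seen = Counter()     # (neighborhood, video) -> request count
    edge_cache = set()            # pairs already cached near the viewers
    POPULARITY_THRESHOLD = 3      # cache after this many local requests

    def serve(neighborhood, video):
        key = (neighborhood, video)
        requests_seen[key] += 1
        if key in edge_cache:
            return f"{video}: served from the {neighborhood} edge cache"
        if requests_seen[key] >= POPULARITY_THRESHOLD:
            edge_cache.add(key)   # copy moves off the origin, closer to viewers
        return f"{video}: served from the origin server"

    for _ in range(4):            # the fourth request hits the edge cache
        print(serve("palo-alto", "hd-movie"))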

Or peer-to-peer access – which harkens back to the early days of computing, when companies and universities shared unused compute capacity – could turn our mobile devices into nodes in the network. If you have bandwidth that you’re not using, someone who needs additional bandwidth could communicate with your device to get that access.

Communicating at any time from anywhere

In five years we will see the massive introduction of machine-to-machine services. People won’t have to initiate communication to get information; rather, systems will push communication and data to mobile users. For example, your mobile will have access to your electronic healthcare records while also monitoring your vitals, such as blood pressure, in real time. A system could then notify you and connect you to a doctor if your blood pressure strays outside the normal range.
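
Conceptually, the monitoring loop behind such a service is a range check over a live feed. A toy sketch, with bounds and readings invented for illustration:

    # Toy machine-to-machine alert loop; bounds and readings are invented.
    NORMAL_SYSTOLIC = range(90, 140)    # mmHg, illustrative bounds only

    def check_reading(systolic, notify):
        if systolic not in NORMAL_SYSTOLIC:
            notify(f"Systolic {systolic} mmHg out of range - connecting you to a doctor")

    for reading in [118, 124, 152]:          # fabricated telemetry stream
        check_reading(reading, notify=print)  # a real system would page a physician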

Paper currency will also become obsolete as transactions move from mobile to mobile. As security issues and banks’ roles are worked out, we will be able to buy and sell goods, lend money to a friend, and more. Countries that don’t have to battle legacy telco infrastructure are leading the way. Kenya, for example, does not have a traditional banking infrastructure, so telcos there are offering mobile banking to provide micro-transactions.

Industry regulations, security controls, and improved bandwidth and speed (in the case of countries with legacy infrastructure) will determine how quickly these capabilities and services become available.

Think about this: your mobile device knows where you’re going, where you’ve gone, what you’ve bought, where other people have gone and bought, and other data that could change the way people start thinking about their daily routines (commuting, shopping, investing, etc.). With this whole new set of data that we can apply predictive analytics to, I could predict how a behavior – say, eating fast food – would affect my current health.

Some industrial nations, such as South Korea, are fast approaching these capabilities. Other places you may not suspect, such as parts of Africa, are poised to become “haves” in the mobile industry, too. For countries not keeping up (including a “have” like the U.S.), the cost could be more than weak signal strength in a rural area or a slowly downloading video – it could prevent the rollout of entire services.

Think this topic is the most-likely prediction, or maybe just the most innovative, among the Next 5 in 5? Vote for it by clicking "like" on IBM's smarter planet.

IBM 5 in 5: Generating energy from unexpected sources

Editor's note: This post about IBM's Next 5 in 5 prediction about future energy sources is by IBM Distinguished Engineer Harry Kolar.

It happens all the time; you forget your cell phone charger at home, and your smartphone battery runs out after hours of email and Angry Birds. But what if you could recharge your cell phone using power you’ve generated simply by walking?

Anything that moves has the potential to create energy. In the next five years, advances in renewable energy technology could make it possible for us to draw on power generated by everything from our running shoes to the ocean’s waves.

Your body will become an energy-generating machine

Walking involves a variety of dynamic forces. The strike of your heel on the ground and the bend of your sole release a lot of energy that is otherwise dissipated.

These simple movements can become a power source – enough to charge your cell phone – with the help of a small device with an antenna inserted into the sole of your shoe.

This science – parasitic power collection – captures and transmits energy created by the slightest movement.
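
Some back-of-envelope arithmetic shows both the opportunity and the challenge. Every figure below is a rough assumption, not a measurement:

    # Rough feasibility arithmetic; every figure here is an assumption.
    harvest_per_step_j = 0.1      # assume ~0.1 J usefully captured per step
    phone_battery_wh = 5.0        # a circa-2011 smartphone battery, ~5 Wh
    battery_j = phone_battery_wh * 3600

    steps_needed = battery_j / harvest_per_step_j
    print(f"{steps_needed:,.0f} steps for a full charge")   # -> 180,000 steps

Under assumptions like these, a full charge takes a great deal of walking, which is why topping up a battery, rather than replacing the charger outright, is the more realistic near-term goal.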

Think about the possibilities. A device on the spokes of your bicycle could measure and collect energy that’s then transmitted to power your kitchen appliances. The water running through your pipes could power the lights in your house.

Now think bigger: what could you do if you could harness the energy of the ocean?

You can harness the power of the ocean


Wave energy and tidal energy are developing forms of power that are virtually limitless. They’re clean and renewable, and they will lessen the strain on our traditional power grids.

Wave energy and tidal energy are collected from the ocean in different ways. Most wave energy converters float on the surface of the water and use various designs to generate electricity. Tidal energy converters typically sit on the sea floor, completely submerged; they look like large turbines or propellers that spin with the incoming and outgoing tides. Tidal energy is quite predictable thanks to the periodic nature of the tides, while wave energy requires more complicated modeling to predict its characteristics over time.

Before we can make use of these energies, however, we need to understand and minimize their environmental impact. For example, the devices that collect and convert wave and tidal energy generate noise underwater that can affect marine life.

My team is working with the Sustainable Energy Authority of Ireland to use real-time streaming analytics that monitor underwater noise and track its potential impact on the marine environment. That data will be shared across the wave energy industry to help build a clearer picture of how this type of technology can be used safely, sustainably and in a controlled way.
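
In outline, such streaming analytics maintain a rolling statistic over hydrophone samples and flag sustained exceedances. A simplified sketch - the window size, threshold and readings are all invented:

    # Simplified rolling-window noise monitor; all numbers are invented.
    from collections import deque

    WINDOW, LIMIT_DB = 5, 120.0     # samples per window, flag threshold

    def monitor(stream_db):
        window = deque(maxlen=WINDOW)
        for level in stream_db:
            window.append(level)
            mean = sum(window) / len(window)
            if len(window) == WINDOW and mean > LIMIT_DB:
                yield f"sustained noise at {mean:.1f} dB - flag for review"

    hydrophone = [112, 118, 125, 127, 126, 128, 124]   # fabricated readings
    for alert in monitor(hydrophone):
        print(alert)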

Beyond the obvious benefits like cleaner power, using the ocean’s energy could have significant economic benefits.

High-wave energy conditions exist in many areas around the world and could have real value for coastal countries like Ireland, which has one of the largest concentrations of wave energy in the world, yet had to import about 86 percent of its energy (mostly fossil fuels) in 2010.

The economic ecosystem that will grow up around wave energy generation sites will involve many parties and bring considerable investment. Power companies will become involved to provide grid connections, and specialists in marine engineering will handle the care and maintenance of the sites.

Information Technology will be a key participant in this process too. The integration of leading-edge technologies like advanced analytics and smart grid components will help connect and manage new renewable resources and ensure operational efficiencies as well as consistent and predictable performance. 

IT will be needed for monitoring, analysis, simulation and modeling. IT will also help monitor and capture the economic performance of the technology and further support the application of these new renewable energy sources.

Making it easy to make smarter energy choices

Now more than ever, we’re starting to understand the need to conserve energy. With populations growing and electricity demand expected to grow at 2.2 percent per year to 2035 (according to the World Energy Outlook 2010), our current energy infrastructure is just not enough.

But our consumer decisions are motivated by factors like convenience, comfort, cost and the opportunity for digital connection. We need access to the right tools and information to make smarter energy consumption decisions, and those tools are getting closer to reality thanks to technology like parasitic power collection and wave and tidal energy.

Think this topic is the most-likely prediction, or maybe just the most innovative, among the Next 5 in 5? Vote for it by clicking "like" on IBM's smarter planet.

IBM 5 in 5: Big Data & sensemaking engines start feeling like best friends

Editor's note: This IBM 5 in 5 post about Big Data and analytics is by Jeff Jonas, IBM Distinguished Engineer and Software Group's Chief Scientist of Entity Analytics. He blogs at www.jeffjonas.typepad.com and can be found on Twitter @jeffjonas.

Click-through rates for unsolicited advertisements range from near zero to roughly five percent. From the recipients’ point of view, just about every such communication is more time-wasting spam.

Imagine a future where some sources of unsolicited advertisement produce ads so useful and perfectly timed that you would sign up for them. A world where virtually every text message or email pushed at you is so relevant that this “service” starts feeling like a best friend.

Here at IBM we are working on sensemaking technologies where the data finds the data, and relevance finds you. Drawing on data points you approve (your calendar, routes you drive, etc.), such technology will make predictions that seem ingenious.

Imagine this: you actually sign up for such an unsolicited advertisement service. Three days later, it has suggested nothing. Why? Because there wasn’t anything worth your attention. But on day four, 10 minutes before you hop in the car to drive to a meeting over coffee, you get this text: “Don’t take the 405, take the 110, then exit 10b, and 3 blocks up there’s a Starbucks.”

You think: Well that is nuts; my coffee shop meeting with my buddy Kenny is at least 15 miles from there. So you text “?” as a reply.

The answer: “Big accident on the 405, will affect Kenny too; already cleared this with him, and this Starbucks is the proposed compromise when considering all the factors.”

Now you could text another “?” to see what these other factors are, but you know it’s pointless. As you pull out of your driveway you are thinking “I love you” as you think about this new best friend in your phone.

This new era of Big Data is going to materially change what is possible in terms of prediction. Think of the difference between a big pile of puzzle pieces and pieces already in some state of assembly – only the latter reveals the picture. This is information in context, and while some pieces may be missing, some may be duplicates, others may contain errors, and a few may be professionally fabricated lies, what can be known emerges only as the pieces come together (data finding data).
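
The puzzle-piece idea can be sketched in a few lines: each arriving record is matched against accumulated context, and every match enriches the picture. The matching rule below (a shared phone number or email) is purely illustrative:

    # Tiny sketch of "data finding data": each arriving record is matched
    # against accumulated context; matches merge into one entity.
    # The matching rule (exact shared phone or email) is illustrative only.

    entities = []   # each entity is a dict of merged attributes

    def ingest(record):
        for entity in entities:
            if any(record.get(key) and record.get(key) == entity.get(key)
                   for key in ("phone", "email")):
                entity.update(record)   # a new piece snaps into the puzzle
                return entity
        entities.append(dict(record))   # no match yet: a new loose piece
        return entities[-1]

    ingest({"name": "K. Smith", "phone": "555-0101"})
    ingest({"email": "ks@example.com", "phone": "555-0101"})
    print(entities)   # one merged entity, not two fragments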

Big Data in context is one of the most significant trends in the information technology field.

This type of technology is going to be real time. Today, insight produced at the end of the week – after a customer has left the web site, or after a bank has already approved the loan – only leaves organizations wondering why the answers came so late. Sensemaking systems will deliver sub-200ms insight: fast enough to do something about a transaction while the transaction is still happening (aka perfect timing).

We at IBM are well down this road toward massively scalable sensemaking analytics. And whether you benefit through ingenious advertising services (versus spam) or better health care outcomes, the future will bring higher-quality predictions, faster.

Think this topic is the most-likely prediction, or maybe just the most innovative, among the Next 5 in 5? Vote for it by clicking "like" on IBM's smarter planet.  

12.12.2011

Dr. Lawrence Lippitt, author of "Preferred Futuring", Visits IBM Research - Zurich

5 questions with Dr. Larry Lippitt

Q. What is "Preferred Futuring"?


Larry Lippitt: Preferred Futuring is a way to engage everyone in the system so that they can communicate with each other. Communication is so important and so basic to operating effectively as an organization, and, nowadays, it happens so very quickly, too.


Preferred Futuring helps people come together and discuss “How on earth did we get to where we are?” and “Where are we, actually?”


Once we’ve agreed on these issues, we can discuss “Do we have any values or beliefs that have participated in getting us there?” because our basic values and beliefs affect the way we behave.


And finally, “What are some of the trends and developments on the horizon?” As we hope and plan to arrive at the future we want, we need to ask, “Which trends might impact us?” We need to be smart in our strategic thinking.


Then, collectively, we all participate in determining “Where do we want to be?” Not “Where should we be?” or “Where ought we to be?” but “Where do we want to be?” It’s about listening to the passion in my heart that says “This is exciting, and I want us to get there together.”


We need to talk with each other in order for us all to work together towards getting there. This is then followed by planning and implementation. That’s basically the process.


Q. The event today is about inspiration transformation. Can you explain?


LL: I have to smile at this because sometimes the process generates so much energy that you just have to get out of people’s way. Inspiration transformation comes from the sudden understanding that we’re all on the same page, or when you recognize what our whole system looks like and think, “Wow, I never knew.”


So the idea is to experience this sensation together, suddenly, as part of an integral system – not just you over there and me over here. When we begin to see that and experience that, the energy grows. Then we can talk about some of the hard questions, such as “What are we doing right?” as well as “What are we doing wrong?” – because not everything we’re doing is wrong, of course.


We begin to get a complete picture, a balanced view of where we really are, not just of all the problems we have. Then when we share the vision with each other, it’s like splitting the human energy atom! There’s so much motivation and the ability to innovate, to think outside of the box I was in just a little while ago.


The process gives us the inspiration to continue together, and it gives us the motivation from that energy.


Q. Based on your 30 years of experience, are there any tricks that you can share?


LL: You know, being an effective leader is really not that complicated. It seems that way because there’s so much to do and so much to take into account. But it is really about connecting with the other person. Not letting the fact that you’re the leader or I’m the leader and you are to be led, whatever – not letting that interfere with the fact that we are people and we need to work together to get a job done.


As a leader you need to know that we don’t get anything done unless we can work together as colleagues, as equals, although there may be an unequal status because someone is the designated leader. But it’s important to let that not get in the way.


The other thing is respect and courtesy. These are basic things we should have learned in kindergarten. Respect means that, well, maybe I’m not quite sure about your ideas, but because they’re your window on truth, it is important that I as the leader listen to you.


Because when you listen to that truth, suddenly things change. You have to try to understand the other person through their own eyes. Something magical happens when you’re able to work together and communicate with each other and this enhances the leader’s ability to have empathy.


Q. As we honor 100 years of IBM, any personal anecdotes?


LL: (Laughs) Well, this goes way back in history – because I also go way back. There was the stereotype that, at IBM, everyone was buttoned-down – we all wore dark suits and ties. One time I was at a facility outside of Boulder, Colorado. Coming to work one morning, I – one of these buttoned-down guys – had this little pop! of inspiration about leadership.


I don’t think many people at IBM wear dark suits anymore, certainly not in the techie part of the organization—so that’s one memory that makes me laugh.


Q. Any advice for future leaders?


LL: You know, it may not seem a fashionable approach right now because leaders are supposed to direct others and so on. But I think it’s essential to learn the skills that help people do their job, to listen and do those very human things that support the people you work with to do their job better. To develop the skills that let you be helpful and facilitate the processes relating to the work.


Many thanks, Dr. Lippitt, for these very interesting insights.


LL: It was my pleasure, thank you.

12.06.2011

The Future of Healthcare

The introduction of so much new digital medical information is transforming the decision-making process in the healthcare ecosystem. Patients often seek out information before they speak to a doctor, and clinicians are using computers to help with diagnosis or with the selection of treatment options. In short, what used to be an intimate doctor-patient twosome has now become a threesome: the doctor-patient-computer triangle. But do all three entities in this new relationship have an equal say in what decisions are made?

IBM Research – Haifa hosted a healthcare colloquium in honor of IBM's centennial, convening thought leaders from the healthcare community to discuss this new transformation and its implications for the future.

Perspective on the past and vision for the future

IBM researchers in Israel pioneered some of the first IBM projects in the areas of information-based medicine, standards for medical data, interoperability for medical imaging, and clinical genomic analytics. Today, the lab specializes in research related to the integration of vast amounts of medical information and gaining insight from this data by enabling access and sharing.

The researchers have developed an online system that gives clinicians a prediction of which drug or drug cocktail will work best for a given patient infected with a specific strain of HIV. In another project, they created a secure web-based system that allows public health institutions and centers for disease control to electronically share public health data—even across geographical and political borders in the Middle East.

Another project reinvents the patient portal, enabling patients to integrate and manage their healthcare data for all medical needs, receive personalized recommendations or alerts for safer medical treatment, and immediately access data from a vast range of sources.

Yet even with the support of the most sophisticated technology, noted Prof. Jonathan Halevy, of the Shaare Zedek Medical Center, the foundation of the doctor-patient relationship remains unchanged. "There is still no substitute for face-to-face encounters between physicians and their patients. The doctor will never be optional."

Watch all of the event's presentations


Keynote Address
Dan Pelino, General Manager of Healthcare and Life Sciences, IBM

The Doctor-Patient Relationship in the Internet Era
Prof. Jonathan Halevy, Director General of Shaare Zedek Medical Center

The Digital Transformation of American Radiology
Dieter Enzmann MD, Chair of UCLA Radiology

Panel: The New Doctor-Patient-Computer Triangle
Participants: Dan Pelino, Shmuel Reis, Kobi Vortman PhD, and Prof. Eddy Karnieli; Moderated by Yardena Peres

Advanced Decision Support Technology – IBM Watson for Healthcare
Dafna Sheinwald PhD, IBM Research – Haifa

From Biological Discovery to Personalized Medicine: The Role of Computation in Human Genetics
Itsik Pe’er PhD, Columbia University

Clinical-Genomics Decision Support at the Point of Care
Boaz Carmeli, IBM Research - Haifa

Closing Remarks
Aya Soffer, Director of Information Management Analytics, IBM Research – Haifa

Poster Session

12.05.2011

How to build computers of the future

Researchers at IBM are building the computing devices of the future - but you're less likely to find them focusing on slimmer, smaller, lighter, sleeker holiday gift ideas. IBM's top computer scientists, physicists and chemists can instead be found improving compute power based on advanced physics discoveries; shrinking transistors while improving their performance; and even developing circuit architectures that'll give you better cell phone reception and protect devices against radiation.

And IBM's $6B annual investment in R&D doesn't just mean playtime in the lab. Today, 10 of IBM's research papers were recognized at the IEEE International Electron Devices Meeting. The conference, in its 60th year, is hosted by the world’s largest professional association dedicated to advancing technological innovation and excellence for the benefit of humanity.

Here's a look at what IBMers are contributing to the future of computing:

Racetrack Memory
Photo Caption: IBM Research - Almaden's Stuart Parkin is a pioneer in racetrack memory. Lightning-fast boot times anyone?
  • Combines the benefits of magnetic hard drives and solid-state memory to outsmart Moore's Law (increased power demands, shrinking devices)
  • For the first time, IBM researchers are marrying Racetrack memory with CMOS technology (on which virtually all electronic equipment is built)
  • Improves density, potentially allowing massive amounts of info to be accessed in less than 1 billionth of a second

Graphene
Photo Caption: Earlier this summer, IBM Research unveiled its first wafer-scale graphene integrated circuit smaller than a pinhead
  • The first ever CMOS-compatible graphene device can advance wireless communications and enable new high-frequency devices operable under adverse temperature and radiation conditions
  • Researchers have developed a new technique to improve the structure of graphene transistors, demonstrating stability at far higher temperatures than previously achieved

Carbon Nanotubes
Photo Caption: Carbon nanotubes have been used to develop improved solar cell technology, and IBMers have discovered excellent off-state behavior in extremely scaled devices - an energy-saving technique
  • IBM researchers demonstrated the first carbon nanotube transistor smaller than 10 nanometers, which easily outperforms the best competing silicon-based devices
  • Nanotechnology discoveries like this point to improved solar cell technology

To read the technical details of IBM's three breakthroughs, check out the IBM press release. The Wall Street Journal's Don Clark interviewed IBM Fellow and VP of Innovation Bernie Meyerson in this story.

12.01.2011

Dave Ferrucci at Computer History Museum: How it all began and what's next

CHM President John Hollar
In front of what Computer History Museum president John Hollar called "the largest crowd for a Revolutionaries lecture" he had ever seen, IBM Watson principal investigator Dave Ferrucci sat down with the Financial Times' Richard Waters on November 15 for a conversation about "A Computer Called Watson." To the audience of about 450 Silicon Valley techies, influencers, teenagers and inspired engineers, Dave kicked off the conversation by explaining how Watson came about - and it began with the notion of natural language processing, namely the contextual aspects of language.


"At our house, I'd always call the kids down to see something 'interesting' that I'd done - some type of experiment or science-related thing," Dave said. "After enough of these demonstrations, my daughter started to associate the word 'interesting' with 'boring' - so there's a little about language context."

IBM Watson principal investigator David Ferrucci (left)
with Financial Times' Richard Waters
It turns out Dave was headed toward a career in medicine and was pursuing an M.D. rather than a Ph.D. But the biology major quickly developed a fascination with artificial intelligence, and a passion for programming. "I thought it was incredible that you could tell the computer what to do - and that it would do it," he said.

After obtaining his BS in biology from Manhattan College, he pursued computer science with an emphasis in knowledge representation and reasoning at Rensselaer Polytechnic Institute, completing his Ph.D. in 1994.

Since joining IBM in 1995, Dave has contributed largely to the Research function as a computer scientist. But in 2007, when IBM executive Charles Lickel challenged Dave and his team to revolutionize Deep QA and pit an IBM computer against Jeopardy!'s human champions, he was off to the races.

"I had to get funding," Dave explains. "I told the executives I could do this in 3-5 years. I kind of just guessed."

The executives bought it, and Dave had a huge task ahead. By assembling a team of eventually 28 researchers spanning natural language processing, software architecture, information retrieval, machine learning, and knowledge representation and reasoning, Dave created Watson - a computer system that, using a combination of sophisticated hardware and software, could understand natural language and deliver a single, precise answer, with confidence and evidence for its decision.
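
That architecture can be caricatured in a few lines: generate candidate answers, score each against the evidence, and answer only when the best candidate's confidence clears a threshold. The scores below are fabricated; the real DeepQA combined hundreds of evidence scorers with machine-learned weights:

    # Drastically simplified caricature of DeepQA-style answering.
    # Candidate scores are fabricated; the real system combined hundreds
    # of evidence scorers with machine-learned weights.

    def answer(candidates, threshold=0.5):
        """Normalize raw evidence scores into confidences; answer if sure."""
        total = sum(candidates.values())
        confidences = {c: s / total for c, s in candidates.items()}
        best = max(confidences, key=confidences.get)
        if confidences[best] >= threshold:
            return best, confidences[best]
        return None, confidences[best]    # too unsure to buzz in

    print(answer({"Chicago": 2.7, "Toronto": 0.9, "Boston": 0.4}))
    # -> ('Chicago', 0.675)

On the show, a confidence threshold of just this kind determined whether Watson buzzed in at all.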

At the end of the conversation, Dave told the crowd about Watson's new job in the medical field: "We want Watson to enable better judgment by humans in decision-making, whether it be in medicine, law, finance or services," Dave said. "While the human is the ultimate decision-maker, Watson will provide evidence and confidence by scouring millions of sources of related information in a short amount of time."

GigaOm's Stacey Higginbotham takes on IBM's Watson
and Sierra Ventures' Robert Walker in an exhibition match

In an exhibition Jeopardy! game following the talk, IBM's Eric Brown was the ultimate Alex Trebek, hosting players GigaOm's Stacey Higginbotham, Sierra Ventures' Robert Walker, and "oh yeah, our third contestant, Watson, from Yorktown Heights, New York, built by a couple of computer scientists," an introduction met with laughter that would continue throughout the game.

The humans playfully 'teamed' up against the computer,
high-fiving and fist bumping on each correct answer
The animated human contestants instantly won over the crowd after trailing Watson through the first part of the game. In fact, when Stacey buzzed in with the first correct question for the humans, the crowd went wild.

The night continued in that way, and the human contestants even found themselves getting answers from the crowd, to which host Eric Brown responded: "Watson can't hear you, so humans have an advantage!" As it turned out, the trick was buzzing in before Watson - hard to do unless you're a seasoned Jeopardy! vet like Ken Jennings or Brad Rutter.

As the exciting match wound down, all three contestants answered the Final Jeopardy! question correctly and Watson came away with the win - but the auditorium was left with tremendous enthusiasm for this computer and its impact on the future of technology.

You can watch the entire presentation on YouTube:

More from the event:

Pre-game (post-practice game) thoughts from contestants:



This post originally appeared at ibmresearchalmaden.blogspot.com on Friday, November 18, 2011.