
Monday, February 27, 2012

Smart Ways To Deal With Dumb Clinical Alerts

Paul Cerrato
Editor, InformationWeek Healthcare

Clinical alerts may help reduce dangerous drug interactions, but sometimes they cause more problems than they solve. Experts describe best practices to make them work better.

Clinicians, whether young or old, technophobes or technophiles, continue to complain about the avalanche of unnecessary alerts they get when working in a clinical decision support or e-prescribing system. It's not uncommon to hear a cardiologist, for instance, complain: "I've been practicing for 15 years. I don't need to be cautioned about ordering aspirin for a patient at risk of hemorrhagic stroke."

On the flip side, IT leaders and clinicians worry that these systems miss needed alerts because they're incapable of taking into account important free-text data from clinicians' notes. Allison McCoy and her colleagues at the Department of Biomedical Informatics, Vanderbilt University School of Medicine, give a good example in a recent Journal of the American Medical Informatics Association (JAMIA) report: an infectious disease specialist orders a powerful antibiotic for a life-threatening infection in a patient with compromised kidney function, and records that fact in the clinician's notes.

In a situation like this one, it's likely that clinicians would by default get a warning to discontinue such a drug because it can be toxic to the kidneys. But instead, the clinical decision support system should alert doctors to monitor renal function closely. The risk of dying from the infection is far greater than the risk of renal damage from the antibiotic. But most CDSSs aren't that smart, yet.
It's also important to have an alert system that puts more emphasis on evidence-based rules and algorithms than it does on expert medical opinion. As McCoy and her colleagues explain, some alerts are so well documented and so important that they should almost never be ignored. Their JAMIA report points out, for instance, that "an alert to discontinue ongoing oral potassium supplements and to monitor serum potassium levels for a patient whose potassium level had gradually increased to 6.0 mEq/l should rarely be overridden."

Of course, any alert system tweaking is going to involve tradeoffs between false positives and false negatives. If an alert is too sensitive, it may warn caregivers about threats to patient safety that are very unlikely to occur in the real world--a false positive. On the other hand, if the system warns clinicians only about the most severe, life-threatening dangers, it will stay silent about real but less dramatic threats--false negatives--and clinicians are going to have a lot of living but very sick patients on their hands as a result.

Similarly, every system needs to strike a balance between sensitivity and specificity. Typically, the more sensitive the system is--that is, the better it is at detecting real dangers, the true positives--the less specific it is: the less capable it is of correctly dismissing orders that pose no danger, the true negatives.
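As a rough illustration of that tradeoff, both rates fall out of a simple confusion matrix over an alert log. The counts below are invented purely for illustration:

```python
def sensitivity(tp, fn):
    # Share of real dangers the system flagged (true positive rate).
    return tp / (tp + fn)

def specificity(tn, fp):
    # Share of harmless orders the system correctly left alone (true negative rate).
    return tn / (tn + fp)

# Hypothetical alert log: 90 real interactions flagged, 10 missed,
# 700 harmless orders passed silently, 300 alerted on needlessly.
print(sensitivity(90, 10))    # 0.9
print(specificity(700, 300))  # 0.7
```

Tuning an alert threshold moves these two numbers in opposite directions, which is exactly the tradeoff IT teams are negotiating.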

What Do The Experts Say?
At our recent InformationWeek Healthcare IT Leadership Forum, Johns Hopkins Hospital's CIO and CMIO talked about how their organization is solving the alert fatigue problem.

CIO Stephanie Reel said the hospital's alert technologies, tools, and protocols "can identify the fact that someone has seen an alert before for a specific patient and doesn't need to see it again." The hospital also tailors certain alerts to different categories of clinicians with different skill sets. "An experienced cardiologist doesn't need to be alerted about an interaction between Coumadin and aspirin," Reel said, because it's assumed that a practitioner with this level of experience already knows that fact.
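Suppression logic of the kind Reel describes can be sketched in a few lines. The identifiers, roles, and alert names below are hypothetical, and a real system would persist this state rather than keep it in memory:

```python
# Toy alert router: suppress repeats per clinician/patient, and skip
# alerts aimed at a different category of clinician.
seen = set()

def should_fire(clinician, patient, alert_id, role, audience=None):
    # If the alert targets specific roles, skip clinicians outside that audience.
    if audience is not None and role not in audience:
        return False
    # Fire each alert at most once per clinician/patient pair.
    key = (clinician, patient, alert_id)
    if key in seen:
        return False
    seen.add(key)
    return True

print(should_fire("dr_a", "pt_1", "warfarin-aspirin", "hospitalist"))  # True
print(should_fire("dr_a", "pt_1", "warfarin-aspirin", "hospitalist"))  # False: repeat
print(should_fire("dr_b", "pt_1", "warfarin-aspirin", "cardiology",
                  audience={"hospitalist"}))                           # False: wrong audience
```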

Another way Johns Hopkins deals with too many alerts is to devise treatment protocols that reduce the need for alerts in the first place. The hospital does this by creating "smart order sets." Peter Greene, MD, Johns Hopkins' CMIO, used the example of ordering postoperative insulin for a patient. There are now several hundred different options on how to order a diabetic regimen in such a situation.
So instead of having the physician walk through countless screens and alerts, much of the programming is done behind the scenes, pulling in key pieces of information about the patient that are already in the e-chart to help find the right insulin regimen. In the end, the ordering physician needs to answer only a few questions before the CDSS makes a very specific recommendation.
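A smart order set of this kind can be thought of as a small rule engine over the e-chart. The sketch below is a toy illustration only: the field names, thresholds, and doses are invented and are emphatically not clinical guidance.

```python
def recommend_regimen(chart, eating):
    """Toy order-set logic: combine e-chart facts with one clinician answer
    to narrow hundreds of regimen options down to a single suggestion.
    All thresholds and doses here are invented for illustration."""
    if not eating:
        # Patient is NPO: no nutritional insulin doses.
        return "basal insulin only while NPO; recheck when diet resumes"
    total = 0.5 * chart["weight_kg"]   # invented weight-based estimate
    if chart["egfr"] < 30:
        total *= 0.5                   # invented renal dose reduction
    return f"basal-bolus regimen, ~{total:.0f} units/day total"

# The ordering physician answers one question; the chart supplies the rest.
print(recommend_regimen({"weight_kg": 70, "egfr": 90}, eating=True))
```

The point is architectural, not clinical: the clinician answers one or two questions, and the rest of the decision tree is pruned automatically from data the system already holds.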

At University of Pittsburgh Medical Center, the CDS and e-prescribing systems are also getting smarter. Daniel Martich, MD, the center's CMIO, explained in an email that UPMC also directs specific alerts to specific provider groups. In certain inpatient situations, alerts about ordering or canceling telemetry will go only to the attending physician and his or her team, not to consultants.
Similarly, Martich explained that "we fire certain specific medication interactions on the inpatient side only for the prescribing pharmacist, and not the ordering physician in CPOE."

Other specific, evidence-based alerts that UPMC pushes out include one about the risks of anticoagulants (sent to clinicians only if these drugs are ordered for a patient with an epidural catheter in place), and one that alerts the clinician about the need to adjust the dosage when placing NPO (nothing by mouth) orders for a patient receiving insulin or oral anti-hyperglycemic agents.

Of course, as your IT department thinks about ways to make its alert system smarter, you have to maintain reasonable expectations. Not every healthcare provider has the deep pockets of a Johns Hopkins or UPMC. Customizing these programs can get expensive, requiring clinical experts to review the algorithms and IT staff to rewrite the code.

But as your organization balances costs and benefits, keep in mind that smarter decision support systems will help clinicians concentrate on the most important issues and reduce expensive medical errors. It's not too hard to believe that more dollars spent on such IT initiatives will translate into fewer dollars going out the door in malpractice settlements.


5 Strategies To Fail Fast On IT Projects

Failing fast can save your organization millions of dollars, but it isn't easy to do.
Even if our resumes and LinkedIn profiles portray us as superhuman, we all fail. And nowhere are those failures more apparent than on some of the big IT projects we lead and manage.

The important question is: How quickly can we fail at something before we make a big investment in it? Failing fast, one of the themes of InformationWeek's upcoming report on IT project management, is a practice that can save an organization millions of dollars.

I've worked at both private companies and government enterprises, and most of them have something in common: lots of rules, gatekeepers, and controls--longhand for bureaucracy. The checks are all well intentioned, but the combined effect is the same: slow, lumbering projects.
At the same time, enterprise IT projects start to acquire a certain credibility the longer they're allowed to survive. Projects start to attach themselves to the careers of certain managers, and there's a reciprocating effect: The projects take on a heightened importance among line-level staff ("Oh, what will Ms. X think if I don't do well on this project?"). The project's sponsors damn the torpedoes and plow ahead, because the project simply must work. "My name's on it!"

Projects also acquire a life of their own. You know how data center technicians will insist that, this time, one more reboot, one more fix, will make the upgrade work rather than concede that the project is turning into a nightmare? That's how some project managers get. Instead of raising a flag to warn that a project (a) doesn't have enough resources, (b) was ill conceived, or (c) is subject to new information that makes it a bad idea, project managers have been trained to push the puck forward, forward, forward.
The wheels of bureaucracy will eventually grind to the realization that the project is indeed a bad one--but not until the big investments have been made.

Alex Adamopoulos, CEO of Emergn, a project portfolio management consultancy whose customers include British Airways and BT, says he has seen millions of dollars spent on projects before organizations muster the will to declare them a failure. Organizations are wasting more than money. Hundreds, sometimes thousands of employees disrupt their work processes at a certain point in a big project's lifecycle.
So if a project is doomed, organizations usually treat as a hero the person who pulled the kill switch early on, right? Well, it's complicated.

Greasing The Wheels Of Bureaucracy
As we researched our report, we discovered five key strategies that will help your organization fail fast.

1. Make Your Project Management Offices Business-Focused. "Don't be afraid to call out the business," says Matt Anderson, director of Cerner's PMO. But PMOs must first build up some business cred. Those project managers who start with the premise of "Let's do what we need to do to make the business successful" instead of taking on the typical enforcement and gatekeeper role are far more likely to get management's ear and access to the kill switch.

2. Define Criteria For Failure. These criteria start with a project's sponsoring executive. Execs don't need to tell their PMs up front that it's OK to fail, because often, especially early on, it's not OK to fail. But they do need to let their PMs know that at some point they may need to flag the project for termination, and the sponsoring execs must lay out the criteria. We spend so much time defining project success, but we spend little to no time defining project failure.

3. Assign Programs, Not Projects. If a project fails, that doesn't mean the program is failing, so there's not that same "Oh, Lord, my job has just gone away" feeling. Most of us run many, many projects at once. Don't let PMs create allegiance to projects, just to the program.

4. Adopt Small-Organization Startup Processes. Identify processes suited to quick execution. Eric Ries's book The Lean Startup is an excellent starting point.

5. Don't Declare "Done". Your organization will need to make some changes to create a fail-fast environment. A frequent problem is to declare a failed project "done," stop the meetings, yank the funding, and move on. Don't. Keep the conversation going. Declaring "done" means that everyone can go back to doing things the way they used to. And that means continuing to let misguided projects linger and run up costs. Learn from your mistakes and continuously share that knowledge.

Jonathan Feldman is a contributing editor for InformationWeek and director of IT services for a rapidly growing city in North Carolina. Write to him at @_jfeldman.

E-Prescribing: Not Quite Ready For Prime Time?

Paul Cerrato
Editor, InformationWeek Healthcare

A recent study suggests that electronic systems cause just as many errors as paper-based ones, but deeper analysis doesn't support that conclusion.
David Pogue, the technology editor for The New York Times, likes to talk about consumers' pain points, including the hassle of setting up a home network and e-mailing a video to a friend. Healthcare IT executives have their own long list of pain points, and the latest study published online in the Journal of the American Medical Informatics Association (JAMIA) suggests we may have to add e-prescribing tools to the list.
When investigators from Harvard and Massachusetts General Hospital reviewed more than 3,800 computer-generated prescriptions for outpatients, they found that 11.7% contained errors, and 35% of those mistakes had the potential to cause real harm--what they refer to as adverse drug reactions (ADRs). Although the research didn't directly compare the electronic errors to those occurring in a handwritten system, the report said: "Our results in terms of error frequency with electronic prescriptions are consistent with outpatient handwritten and electronic error rates that have been reported in the literature."

That statement provides a clue to the investigation's most serious shortcoming. The researchers didn't do a direct case-by-case comparison of electronic and handwritten prescriptions. It's not enough to say that the error rates for handwritten scripts--as "reported in the literature"--were about the same. There's no way to know if the circumstances in these older studies match those in the new Harvard investigation.
If, for instance, the types of drugs ordered or the complexity of the dosing regimens differed in studies reported in the literature, when compared to the Harvard report, an accurate comparison would be impossible.
And if, after reading the latest JAMIA report, you're inclined to re-evaluate e-prescribing tools, keep in mind that its data has to be looked at in the context of all the research that's out there. A review of 25 studies concluded recently that e-prescribing systems do, in fact, reduce the number of medication errors, including those likely to produce an adverse reaction in patients.
Lessons Learned
Despite the new study's weaknesses, there's still a good deal that IT managers can take away from it. The researchers looked at more than 12 e-prescribing products, and the error rates varied widely among them, from about 5% to almost 38%. The take-home message is clear: Choose your vendor wisely. (Unfortunately, the products weren't mentioned by name in the report.)
Of course, we can't rule out the possibility that this wide range in error rates was due in part to user error, which only emphasizes the importance of making certain clinicians are thoroughly trained on these systems.
The Harvard team outlines several steps to reduce the likelihood of e-prescribing errors.
Forcing functions. One of the most common medication errors in electronic systems is data omission. If your system forces clinicians to fill in the name of the drug, the dosage, and the specific indication, and it doesn't allow misleading abbreviations, you're less likely to find such mistakes. In the Harvard study, it was estimated that almost 72% of the medication errors and 63% of the potential ADRs would have been eliminated with forcing functions.
Specific decision support, including a feature that automatically checks maximum dosing for various patient populations, would also lower the error rates.
Calculators can help eliminate dispensing errors by calculating the correct dose and duration of treatment based on instructions from the physician, rather than relying on a second person to input the same data elsewhere in the system.
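As a rough sketch of the first of these steps, a forcing function can be as simple as a validator that refuses incomplete orders and error-prone shorthand. The field names and abbreviation list below are illustrative, not a real specification:

```python
# Minimal "forcing function" sketch: an order is rejected until required
# fields are present and error-prone shorthand is removed.
REQUIRED = ("drug", "dose", "indication")
BANNED = {"qd", "qod", "u"}   # shorthand widely flagged as error-prone

def check_order(rx):
    # Collect every problem; an empty list means the order can be accepted.
    problems = [f"missing: {f}" for f in REQUIRED if not rx.get(f)]
    problems += [f"banned abbreviation: {w}"
                 for w in str(rx.get("dose", "")).lower().split() if w in BANNED]
    return problems

print(check_order({"drug": "metformin", "dose": "500 mg",
                   "indication": "type 2 diabetes"}))    # []
print(check_order({"drug": "insulin", "dose": "10 u"}))  # missing indication, banned "u"
```

A real e-prescribing system would enforce this at the UI level, refusing to submit the screen rather than returning a list, but the gatekeeping logic is the same.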
In the end, it's the "preponderance of evidence" that wins the day. A mature e-prescribing system with the right functions, coupled with adequate clinician training, will generate fewer errors than paper and ink.


Physicians Say IT Is Still The Enemy

Paul Cerrato
Editor, InformationWeek Healthcare

Understanding the difference between how academic and community physicians think can foster more productive relationships--and perhaps less tension.
I doubt many CIOs consider the physicians they work with enemies. But I venture to guess that some physicians do consider IT as the enemy.

Some of the animosity stems from statements like this, made by Jason Burke, of the SAS Institute, a business analytics company: "Evidence-based medicine, personal electronic health records [are causing] a transformative shift toward more information-based decision making related to patient care. …"
While most academic physicians will agree with this stance, many community doctors see evidence-based medicine (EBM), clinical practice guidelines, and decision-support systems as "cookbook medicine."
They simply don't believe EBM will have the transformative effect on patient care that Burke suggests. In their minds, medicine is as much art as science, and as such can't be distilled into a series of evidence-based guidelines and rules.
Many clinicians also question the philosophical assumptions upon which EBM is based. Understanding this skepticism is the first step toward getting buy-in from physicians who resist e-health records and clinical decision-support systems implementation.
One source of skepticism is that EBM-generated practice guidelines are usually based on the results of large clinical trials. One problem with these trials is their exclusion criteria. A trial evaluating a drug for hypertension, for example, often includes only patients who have hypertension and nothing else. Such patient populations have to be free of any other chronic disorders that might skew the results.
Such exclusion criteria help investigators get pure data, but this doesn't mimic the real world, where doctors often treat patients suffering from a variety of these "co-morbidities."
Another source of skepticism is something called a Type 2 statistical error. Community physicians place a good deal of faith in their own clinical experience. When they see a patient respond to some treatment that doesn't have the blessing of the experts, many are inclined to believe their own eyes.
What they might be seeing in a clinical trial is a Type 2 error, which occurs when a study enrolls too few patients and the data analysis jumps to the conclusion that treatment X doesn't help disease Y. To detect relatively small but statistically significant effects of any treatment, one's sample size--namely the number of patients enrolled in the trial--has to be large enough. If not, you get false-negative results.
A 2004 review of the research found that more than 300 studies had come to false-negative conclusions because their sample sizes were too small. Reason enough to question EBM.
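The arithmetic behind that claim is easy to demonstrate. Using a standard normal approximation for a two-arm trial (a textbook method, not any particular study's), the sketch below shows how the same real effect is usually missed with a small trial but reliably detected with a large one:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(p1, p2, n, z_alpha=1.96):
    """Approximate power of a two-arm trial comparing event rates p1 vs p2
    with n patients per arm (normal approximation, two-sided alpha = 0.05)."""
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    return norm_cdf(abs(p1 - p2) / se - z_alpha)

# A real effect (treatment cuts event rate from 20% to 15%) is usually
# missed at 100 patients per arm but reliably found at 1,500 per arm.
print(round(power(0.20, 0.15, 100), 2))   # ~0.15
print(round(power(0.20, 0.15, 1500), 2))  # ~0.95
```

In the undersized trial, a genuinely effective treatment would be declared useless about 85% of the time, which is exactly the Type 2 error community physicians have learned to distrust.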
It's unlikely the chasm between clinical experience and clinical experiment will be resolved anytime soon, but knowing it exists can help IT execs be more sensitive to the resistance they meet, and hopefully help them devise a more effective strategy to get buy-in despite these reservations. Coming up with such a strategy just might turn "enemies" into allies.

Paul Cerrato is editor of InformationWeek Healthcare. Join our Mobile Application Smackdown and nominate your favorite health and telehealth apps.

Health IT Leaders Launch Info-Sharing Website

Founded by clinicians, site called Doctors Helping Doctors Transform Health Care encourages the medical community to share its EHR successes, complaints.

A group of physician health IT leaders has launched a nonprofit website for doctors that's designed to promote the transformation of healthcare through the use of information technology. Although not directly aligned with the federal government's Meaningful Use program, the website, Doctors Helping Doctors Transform Health Care, also could help physicians achieve Meaningful Use by aiding them in implementing electronic health records.
Founding board members of the venture include William Bria, MD, chief medical information officer, the Shriners Hospitals for Children, and president, Association of Medical Directors of Information Systems (AMDIS); Peter Basch, MD, an internist with Washington Primary Care Physicians, and medical director, ambulatory EHR and health IT policy, Medstar Health; and Michael Zaroukian, MD, PhD, chief medical information officer and professor of medicine, Michigan State University, and medical director, clinical informatics and care transformation, Sparrow Health System. Janet Marchibroda, chair of the Health IT Initiative at the Bipartisan Policy Center, will serve as executive director.
Funding is coming from the Chan Soon-Shiong Family Foundation, the Optum Institute for Sustainable Health, and Siemens Healthcare. Medical organizations collaborating with the venture include AMDIS, the American College of Cardiology, the American College of Physicians, the American Osteopathic Association, and the American Society of Clinical Oncology. The American Association of Family Physicians and the AMA will play advisory roles, according to a press release.
The site's main goal, said Bria in an interview with InformationWeek Healthcare, is to start a grassroots physician-to-physician "adoption of information technology in the service of patient care."
Noting that physicians have always learned from each other to improve their practice of medicine, he said that collegial aid and support have been insufficiently emphasized in the health IT field.
Bria said he hoped physicians would use the site to share inspirational stories, talk about how health IT affects front-line medicine, "and help one another make this transformation effectively."
Although additional resources are available from organizations such as AMDIS and HIMSS, Bria said, a goal of the site is to get doctors to trade ideas about their problems and solutions in implementing EHRs and using the exchange to improve care--a process that should provide valuable feedback to vendors.
In the process, the site might help some clinicians overcome their resistance to Meaningful Use criteria. But Bria pointed out that the government program is related to financial incentives and "is not the whole transformation. If you make this transition properly and you end up realizing the benefits of having these tools on the front line of patient care, that's vastly more important than getting every dollar that's been appropriated for Meaningful Use."
Similarly, Basch observed that much of the current activity motivated by the Meaningful Use program is "a compliance exercise" to get the government incentives and avoid the penalties. "We want to bring back the higher mission of why this is being done," he told InformationWeek Healthcare. "We want to remind people that the higher purpose of this is to make care better, more affordable and more accessible. It's not about compliance."
At the same time, Basch said, he and his colleagues hope that the discussions on the community site will be "practical and focused. We hope to get veterans talking about their experiences and newcomers talking about challenges. Sometimes the most thoughtful comments come from people who are new to health IT and aren't as forgiving of workarounds as some of us old-timers are."
Besides EHR implementation, he added, the site will address issues such as the challenges of using health IT to improve care coordination and the reasons why so many doctors still are not doing data entry in their EHRs.
"We're not looking for rants. We're looking for people who can point out realistically and constructively to their colleagues the pathways that seem to make sense and those that are troublesome. 'Here's what I've tried that might help.'"


Do Health IT Hires Need A Clinical Background?

Paul Cerrato
Editor, InformationWeek Healthcare 

The debate on which qualifications an IT job candidate needs to work in a hospital or medical practice rages.
If you've kept up with the news in recent months, you're aware of the shortage of qualified IT professionals to fill positions in hospitals and medical practices. The U.S. Bureau of Labor Statistics predicts that jobs in health informatics will jump by 18% by 2016 and expects a shortage of about 50,000 health IT workers over the next five years.
Few people challenge those statistics, but what's upsetting job candidates is that many health IT managers only want people with a clinical background.

Essentially, the debate revolves around this issue: Is it easier to teach an IT generalist the clinical principles needed to work in a hospital or practice, or teach a clinician the general IT principles?

Juliet Daniel, MD, senior director of medical informatics for Community Health Systems, which is responsible for more than 130 hospitals in 29 states, thinks the latter. During a phone interview, Daniel said it's important for someone working in health IT to "understand what it's like to use an EHR" from an end user's perspective. "Healthcare and clinical workflow are just so important, and if you're an IT person and don't understand it, it's hard for you to be influential."
At the managerial level, a clinical background certainly has advantages, especially if you're in a liaison position, as Daniel is. She spends part of her time translating the IT department's capabilities and limitations to clinicians who want to tweak the IT tools so they improve patient care. A comparable position at a company in another industry might be business analyst.
But Daniel thinks the preference for clinical training should even extend, for example, to the IT staffers who set up a clinical database. Building electronic order sets for a CPOE or modifying an order to fit the hospital's drug formulary is best handled by someone who understands clinical workflow, in her opinion.
I'm sure many experienced generalists would question that point of view and, in fact, I recently spoke with the CIO of a major health system who takes just such a contrary view.
During a phone conversation with Larry Stofko, until recently the CIO of St. Joseph's Health System in Southern California and now executive VP for its Innovation Institute, he explained the partnership between himself and his clinical counterpart, CMIO Dr. Clyde Wesp. Although Stofko and his managers were responsible for IT systems and Wesp for the clinical application of that technology, several of their managers have moved back and forth between the two groups.
Someone on the generic IT side who had an affinity for physician relationships, for instance, might shift to the CMIO's group to work on IT projects directly related to patient care. A clinician doing clinical data quality and analysis might transfer to the IT organization to run a data warehouse. One caveat Stofko was quick to mention about hiring someone with no medical background: St. Joseph's runs an informal "week in the life of a clinician" program to give IT pros a better understanding of patient care.
So is it easier to teach an IT generalist the clinical principles needed to work in a hospital or practice setting, or the other way around? St. Joseph's has proved you can move people in either direction, regardless of their background. The bottom line: If a job candidate has drive, a high IQ--and an affinity for healthcare--there are almost no limits on what he or she can accomplish.


Electronic Health Record Security Concerns Are Global


POSTED BY: Robert Charette  /  Mon, September 26, 2011
As I mentioned in a recent post, nearly half of Australians may end up boycotting the new voluntary electronic health record (EHR) system when it launches next year because they believe the government can't guarantee that their private medical details will remain private. A new Harris survey sponsored by the identity management company SailPoint highlights EHR privacy concerns not only in Australia, but also in the United Kingdom and the United States.
According to the survey findings, some 83 percent of Australians, 81 percent of Britons, and 80 percent of Americans express some level of concern about moving their personal medical information to an electronic form.
When they were asked about a health care organization managing their personal information electronically, the survey respondents indicated that they are most concerned about:
  • Their identities being stolen—37 percent of Australians, 33 percent of Britons, and 35 percent of Americans
  • Personal info exposed on the Internet—30 percent of Australians, 26 percent of Britons, and 29 percent of Americans
  • Personal information being viewed by persons not directly related to the patient's care—11 percent of Australians, 15 percent of Britons, and 10 percent of Americans
The responses seem to be in close alignment across all three countries, even though health privacy regulations differ in each country. The lack of faith in IT security vis-à-vis health care seems to be a universal phenomenon, probably with understandable reasons.
For example, since September 2009, at least 9.8 million instances of improper disclosure of medical information have been recorded in the United States. Earlier this month, the renowned Stanford Hospital & Clinics in California added to the total when it announced that the electronic health records of 20 000 of its emergency room patients seen between March 1st and August 31st of 2009, including their names, diagnostic codes, medical record numbers, hospital account numbers, billing charges, and emergency room admission and discharge dates, had been posted for nearly a year (Sept. 9, 2010, to Aug. 23, 2011) on a commercial Web site called Student of Fortune.
The San Jose Mercury News reported that Student of Fortune solicits bids to answer homework questions. The patient information showed up as a spreadsheet attached to a file, and was traced to a vendor that worked for the hospital. All work with the vendor has been suspended pending an investigation.
According to the newspaper, a stolen health record is now worth US $50 on the information black market, whereas a Social Security number is worth about $1 (a credit card number fetches $1 to $2).
Then, in the UK, the National Health Service Eastern and Coastal Kent Primary Care Trust apologized for leaving a CD containing the records of 1.6 million patients in a file cabinet that was later sent off to a landfill for disposal during an office move in March of this year. The records contained the patients' addresses, dates of birth, NHS numbers, and GP practice codes.
The Trust tried to play down the incident by saying the information was from 2002 and probably was not retrievable. The Trust also stated:
"It is important to stress that information systems now are far more secure than they were at the time these files were produced—we no longer store information on floppy disks or CDs and use sophisticated systems of encryption."
Which is true but also somewhat irrelevant because, as has also been reported, the Trust admitted that it needed to retrain its personnel in current data security policies--policies that were not followed, leading to the incident in the first place.
This is not surprising. In a study released last week by the consulting company PwC and its Health Research Institute, only 58 percent of health care providers and 41 percent of health insurers were found to have trained employees on privacy measures related to the use of electronic health records.
With findings like that, it makes you wonder why 20 percent of people in Australia, Britain, and the United States still express no concern about EHRs and IT security.

Roel Castelein (Global) - Big Toilet is Watching You

Posted by EMC 02/24/2012

You just went to the toilet, and that triggered an automatic email informing you that your health insurance premium went up. Next you ask for and receive a personalized doctor's prescription, on your iPhone, to remedy whatever ailment is bothering you. Then you plug an appropriate device into your iPhone and monitor or evaluate your recovery. All historic data on your health is accessible by your doctor, by your health insurer and, most relevantly, by you.
So just how far off is this future vision? Toto, short for Tōyō Tōki (Oriental Ceramics), is a Japanese toilet manufacturer that designed the Intelligence Toilet II. This toilet analyzes your excreta and records data like weight, BMI, blood pressure, and blood sugar levels. It even has a sample catcher in the bowl to obtain urine samples, which, for instance, can be used to detect pregnancies. This information is then sent to your PC, showing your vital health stats.
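The health metrics such a toilet tracks are simple to compute; BMI, for instance, is just weight divided by height squared. A minimal Python sketch (the category cut-offs follow the standard WHO thresholds):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters, squared."""
    return weight_kg / (height_m ** 2)

def bmi_category(value: float) -> str:
    """Classify a BMI value using the standard WHO cut-offs."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

reading = bmi(70.0, 1.75)  # a 70 kg person, 1.75 m tall
print(round(reading, 1), bmi_category(reading))  # 22.9 normal
```

The toilet's added value is not the arithmetic, of course, but the automatic, longitudinal capture of the inputs.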
It would be a simple effort to share this information with your doctor and even with your health insurer. Doctors could then evaluate online how you are doing, and track whether you follow their advice or prescriptions. The diagnosis and subsequent recovery data are valuable for responsible patients, but the essence lies in health insurers capturing and tracking this data, as they do with car drivers. And just as prudent drivers are rewarded with lower car insurance, healthy people could pay less for health coverage. And when you wish to avoid paying a higher premium, you could ask 'Dr. App' on your iPad to help out.
MedAfrica, available for smartphones, is the product of Shimba Technologies, based in Kenya. The platform aggregates information from many sources. It supplies first-aid recommendations from local hospitals, lists of doctors and dentists, and data feeds from the national Ministry of Health for information on things like disease outbreaks or counterfeit drugs. At the same time, Safaricom launched Call-a-Doc, which allows Safaricom's 18 million subscribers to call doctors for expert advice for two cents a minute. These are two innovative examples of Africa leapfrogging the developed world. In a future world where an expanding senior generation has growing medical needs, there might not be enough doctors, so the African 'online doctors' idea could be refined for use in the developed world.
And what about the medical equipment to measure, diagnose and keep tabs on your health? Withings' blood pressure cuff just received FDA clearance. Plug it into your iPhone and it measures your blood pressure. The blood pressure data can then be shared online with your doctor. FotoFinder Systems GmbH, a maker of digital imaging products, recently released a device that docks with an iPhone to turn it into a handheld dermatoscope for performing skin examinations. There are many more examples of these new medical devices which stand to democratize the health care industry.
Now let's put all the pieces of the puzzle together. The vision that arises is one where health care will be connected and democratic. Data and information on individuals' health will be more easily captured, analyzed and shared by all parties. This big data sharing could revolutionize health care. Expensive health insurance ecosystems can function leaner, and people who take care of their health will be rewarded and have at their disposal tools to do so more easily. And last but not least, developing countries will leapfrog expensive health care setups through integrated and innovative information technology.
Taking care of your health will never have been easier: an iPad Doc App and a small suitcase of medical instruments that connect to your tablet or smartphone. Information on your condition is shared, remedies are dispensed online (pills advertised), and improvements are tracked remotely. Your health improves and you pay lower insurance premiums. A truly smarter and healthier world is born.
By Roel Castelein, GTM Strategy for EMEA at EMC

Sunday, February 26, 2012

Sometimes even Warren Buffett gets it wrong (but he can afford to!!)

By JOSH FUNK | Associated Press

FILE - In this Nov. 14, 2011 photo, Billionaire investor Warren Buffett speaks in Omaha, Neb., Monday, Nov. 14, 2011 at an event to raise money for the Girls Inc. charity organization. Buffett wants Berkshire Hathaway shareholders to know that the company has someone in mind to replace him eventually, but he's emphasizing that he has no plans to leave. Buffett offered a couple new details about Berkshire's succession planning in his annual shareholder letter Saturday, Feb. 25, 2012. Investors have long worried about who will replace Berkshire's 81-year-old CEO. (AP Photo/Nati Harnik)


OMAHA, Neb. (AP) — The Oracle of Omaha earned his nickname — and more than a few billion dollars — by spotting investments that others overlooked, but Warren Buffett makes mistakes.

No, really, he does.
Just pick through Buffett's annual letters to shareholders of his conglomerate, Berkshire Hathaway. His pronouncements are eagerly anticipated by investors around the world. But sometimes even the Oracle gets it wrong.

By the second page of this year's letter, released Saturday, Buffett was borrowing a tennis term to own up to "a major unforced error" he'd made on some Texas utility bonds.
Of course, Buffett's shareholder letters are filled with a lot more good decisions than bad ones. His $44 billion fortune attests to that. But the blunders are instructive. Or at least remind us that he's human.

The plainspoken, no-nonsense investor tends to be a good sport about his mistakes. Here are some of the lowlights.

The blunder: Buffett predicted in last year's letter that the U.S. housing recovery would begin within the next year and help fuel economic growth.
The explanation: Buffett doesn't mince words and says he was "dead wrong" about this one. But he says basic biology makes it unavoidable that the country will need more houses.
The quip: "People may postpone hitching up during uncertain times, but eventually hormones take over. And while 'doubling up' may be the initial reaction of some during a recession, living with in-laws can quickly lose its allure."

The blunder: Buffett spent about $2 billion buying bonds offered by Texas utility Energy Future Holdings. But those bonds are now worth about $878 million, and he conceded Saturday that even that could be wiped out.
The explanation: Buffett comes right out and admits misjudging the company's prospects and the likelihood that natural gas prices would remain depressed.
The quip: "However things turn out, I totally miscalculated the gain/loss probabilities when I purchased the bonds. In tennis parlance, this was a major unforced error by your chairman."

The blunder: Some of the companies Berkshire Hathaway has bought don't add much to the company's bottom line. Buffett didn't single out the laggards in Berkshire's manufacturing, service and retail unit, but he acknowledged that a few produce poor returns.
The explanation: Buffett says he misjudged some of these businesses before Berkshire bought them partly because he didn't always listen to curmudgeonly Vice Chairman Charlie Munger.
The quip: "I try to look out 10 or 20 years when making an acquisition, but sometimes my eyesight has been poor. Charlie's has been better; he voted 'no' more than 'present' on several of my errant purchases."

The blunder: In 2008, Buffett more than quadrupled Berkshire's stake in ConocoPhillips when oil and gas prices were near their peak. It cost the company several billion dollars.
The explanation: Buffett said he didn't anticipate the dramatic fall in energy prices that happened later in 2008.
The quip: "During 2008 I did some dumb things in investments. I made at least one major mistake of commission and several lesser ones that also hurt."

The blunder: Buffett has said that buying Berkshire Hathaway itself may have been his worst investment decision. It was a struggling New England textile mill when Buffett bought into it in the 1960s. He kept the mill running for 20 years before shutting it down.
The explanation: Buffett didn't recognize immediately that the textile business was doomed to continue losing money.
The quip: "The dumbest thing I could have done was to pursue 'opportunities' to improve and expand the existing textile operation — so for years that's exactly what I did," he said last year. "And then, in a final burst of brilliance, I went out and bought another textile company. Aaaaaaargh! Eventually I came to my senses, heading first into insurance and then into other industries."

Thursday, February 23, 2012

Re-think everything for mobile or you're toast

Photo credit: iStockphoto/jonya

Tech Sanity Check
By Jason Hiner | February 22, 2012, 9:22 PM PST

Before 2007, surfing the web on a mobile phone was a miserable experience. I remember trying it from BlackBerries, Palm Treos, and Windows Mobile devices, and being so frustrated by how slow and unusable it was that I was dying for the day when we'd be able to access the web from a mobile device just as easily as from a computer.

Obviously, that day is here. In fact, it's reached the point where most of us take it for granted and that's one of the big reasons why sales of smartphones surpassed PCs in 2011. This trend is accelerating so quickly that a lot of companies are going to be in danger of being disrupted if they don't adapt and re-think their customer experience for mobile.
Computers are about to get lapped
Let's take a quick step back.

When the iPhone arrived in June 2007, it was the beginning of a sea change that turned smartphones into full-fledged Internet devices. While the first-gen iPhone was severely limited most of the time because it didn't have 3G mobile broadband, it reinvented the mobile user interface, and when you used it on a Wi-Fi connection you could see that the future was having the full power of the web in the palm of your hand.
Before the iPhone (and eventually Android and Windows Phone 7) arrived, 90 percent of the systems that connected to the web were Windows PCs. It's hard to believe that was only five years ago.

Gartner projects that worldwide PC sales will reach about 400 million units in 2012, while smartphones will surpass 600 million units and tablets will sell about 100 million. That means only about 36% of the new web-connecting devices sold this year will be Windows PCs. That's how much the technology world has been turned on its head in just five years.

Now, remember that those Gartner numbers are only for new devices sold in 2012. So the overall percentage of Windows PCs accessing the web will still be over 50% in 2012, since there are obviously a lot of older machines still in use.

However, the numbers are going to get more dramatic in the years ahead. PCs are about to get dwarfed. By 2015, Gartner projects PC sales will grow to over 500 million, but tablets will triple to about 300 million and smartphones will leap past 1.1 billion.

Despite the fact that this massive sea change is about to come roaring in, the web continues to be a computer-centric place. While many types of workers and business professionals will use computers to design, build, and create content for the Internet for years to come, the primary devices that the majority of users will use to access the Internet will be smartphones and tablets.

The mobile re-think
While iOS, Android, and Windows Phone devices now offer a seriously capable mobile Internet experience, the mobile web itself still hasn't taken off its training wheels. Too many websites are still badly prepared to handle mobile visitors because of the way they use Flash, mouse-over animations, and other JavaScript functionality designed solely for a user with a mouse. And when these sites do offer a mobile version tailored for smaller multitouch screens, the mobile site usually doesn't include all of the functionality of the full site.

That's why user satisfaction with mobile sites is lower than the overall web, and it's why users have gravitated toward downloading native apps that are optimized for the mobile experience. The problem with that is it creates a bifurcated experience for companies because they end up developing a separate set of functionality for the web versus native apps for mobile devices. And since every mobile operating system has a different set of development tools, that means a company has to develop a different app for every platform, and try to keep them all unified and updated. That's impractical and unsustainable — and we haven't talked about the fact that companies now have to design separate apps for tablets.

This situation is not going to make sense much longer, because within a few years more people are going to be accessing the web from mobile touchscreen devices than from computers. The mobile web will simply become the web. That means every company that builds a website will need to rethink site design so that it's always friendly for both a big screen with a mouse and a touchscreen device. But, that's just the first part of the equation. Companies also need to reconsider their entire site experience for mobile, and think about what it could mean for customer service, mobile commerce, geolocational targeting, targeted deals and coupons, and much more.
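The separate-mobile-site pattern criticized above is often implemented with crude server-side user-agent sniffing. A toy sketch of that approach (the keyword list and hostnames are made up for illustration; real user-agent parsing is far messier):

```python
# Naive server-side device detection, the kind of logic an
# "m.example.com" redirect might hinge on. Purely illustrative.
MOBILE_KEYWORDS = ("iphone", "ipad", "android", "blackberry", "windows phone")

def is_mobile(user_agent: str) -> bool:
    """Guess whether a request came from a mobile device."""
    ua = user_agent.lower()
    return any(keyword in ua for keyword in MOBILE_KEYWORDS)

def choose_site(user_agent: str) -> str:
    """Pick which site variant to serve (hypothetical hostnames)."""
    return "m.example.com" if is_mobile(user_agent) else "www.example.com"

print(choose_site("Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X)"))
print(choose_site("Mozilla/5.0 (Windows NT 6.1; WOW64)"))
```

The article's argument is precisely that this fork is unsustainable: a single site designed to work with both mouse and touch input avoids maintaining two diverging codebases.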

The bottom line is that this isn't happening fast enough, and that's going to create a lot of opportunities for disrupters who can create better mobile experiences and use it to leapfrog incumbents. If you're not thinking about this now and planning for it, then you could be putting your business at risk. If your competitors have a smoother and more comprehensive mobile experience then it could give them an important edge with customers, especially since users have even less patience for slow site performance and a bad user experience when it comes to mobile.
Of course, this goes for TechRepublic too, but it's not just for Internet businesses. Every company or organization that has a website and a competitor needs to get serious about this because it's going to be a sea change on the same scale as the iPhone first bringing the capabilities of full web browsing to the phone — only this change isn't just going to disrupt smartphone makers, it's going to affect every kind of company imaginable.

Wednesday, February 22, 2012

The Perfect (Elevator) Pitch

It's a skill every businessperson needs.
How to create it, rehearse it, and tailor it for a specific audience

One of the most important things a businessperson can do—especially an owner or someone who is involved in sales—is learn how to speak about their business to others. Being able to sum up unique aspects of your service or product in a way that excites others should be a fundamental skill. Yet many executives pay little attention to the continuing development of "the elevator pitch"—the quick, succinct summation of what your company makes or does.

That's too bad, because the elevator pitch—so named because it should last no longer than the average elevator ride—is far too important to take casually. It's one of the most effective methods available to reach new buyers and clients with a winning message. True, you may not actually be doing the pitching in an elevator, but even if your meeting is a planned, sit-down event, you should still be prepared to capture your audience's attention quickly.

Keep It Fresh

Every business grows and changes, and your pitch needs to grow and change with it. You can have the most creative logo, the slickest slogan, the most dazzling brochures, and the most cutting-edge Web site, but if your elevator pitch is out of date, you're missing one of your most important opportunities to "brand."

You know your business better than anyone. How are you keeping abreast of the latest ideas? What continues to set you apart from your competition? How can you speak about your record of quality goods and services and make it relevant to your future plans?

As your audience's needs and expectations change, make sure you change the way you speak about your business. Your language, your approach, and what you choose to highlight for a particular audience have got to change over time.

For instance, what has worked in years past with print and broadcast audiences could bore an online audience to tears. You wouldn't think of not updating your other sales and marketing materials, so why would you let your elevator pitch grow stale?
Knowing your business, product, service, or issue well is one thing, but how do you convey excitement and spark interest to those outside your organization? What do you highlight? What do you leave out? And how do those choices change with your audience?

Always Be Prepared

In the early days of my executive coaching firm, I'd worked out an elevator speech with three quick points about what set our training services apart. It was working well, and I'd gotten comfortable, perhaps too comfortable, with using it.

One day, I won a brief introduction to a client in an industry we hadn't trained in before. After my standard elevator speech (in a hallway this time), this decision-maker smiled and said: "Frankly, lowest cost isn't necessarily our highest priority. I'd need to know a lot more about how you might add value to our existing efforts at training, not just your cost—and you'd need to convince me your firm could handle something we don't already offer our type of demanding professional."

He disappeared before I could recover. I didn't have another chance with him for almost a full year. When that time came, I'd made sure to learn all I could about the training his company already had in place and the precise value we could add to existing efforts.
I'd already taken the lesson to heart: Adjust the pitch to the person who is listening, and refine it as you and your business continue to grow and change. It worked, and we've since been able to win that valuable account and many others in the same industry.
I've been on the other side of the less-than-perfect pitch, too. At a conference, a young businesswoman approached me to introduce herself and her Web-building services. She was eager and confident, but after a few minutes of hearing about her competitive pricing, her creativity, and a few of her clients, I said: "Well I hear from a lot of design services, and it's hard to tell the real differences between you. What do you think really sets your work apart for someone like me in a services industry?"
The question obviously caught her off-guard, and she admitted she didn't have an answer. An honest answer, but not a first impression that achieved her goal of getting a second meeting.
Continually perfecting the elevator pitch ensures that you are always able to put your best foot forward as your business grows and changes and your client base expands.


The Elevator Pitch

Uploaded Sep 21, 2006

The CBC's business reality series, Dragons' Den, is where contestants pitch their business ideas to five multimillionaire investors in an effort to acquire the funding they need to bring their business to life. The way to succeed is to master the "elevator pitch." Mentor capitalist Sean Wise explains how.

Monday, February 20, 2012

Tip: How to secure your laptop data –

Shoppers check out laptops at a Best Buy store.

By Rob Pegoraro, Special for USA TODAY

Published: 2/19/2012 2:00:00 PM

Question of the week: Considering that laptops are sometimes lost or stolen, do you have any recommendations for securing confidential data on them?

Answer: The easiest way to keep your data safe is not to put it on a machine that somebody could easily walk away with.

Before everybody says "duh" all at once, think about all of the schools, businesses and government agencies that saved sensitive data on employee laptops that then got stolen. If those offices had stored that data securely on their own servers, accessible only via an encrypted online connection, the major cost of a stolen laptop would have been the price to buy a replacement computer.

On a personal level, this means using cloud-based apps instead of traditional, disk-bound programs — for example, using a Web-mail service instead of an application like Microsoft Outlook, or employing a Web personal-finance tool like Intuit's in place of a program like Quicken.

You should use secure passwords at these sites — see my column of two weeks ago for some advice on that point.

Your browser will offer to save passwords for many of those sites, but unless you have a separate password that must be entered before your browser logs you in anywhere (for example, Mozilla Firefox's master-password option), a thief could take advantage of that feature.

Separate password-management programs like the free LastPass offer one way around that. Another option, contrary to many password myths, is to write down those passwords on a piece of paper — which you then stuff in your wallet, something you already know to keep safe.
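Whichever storage strategy you pick, the password itself is best generated rather than invented. A minimal sketch using Python's standard `secrets` module (the 16-character default is an illustrative choice, not a recommendation from the column):

```python
import secrets
import string

def make_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation,
    using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = make_password()
print(pw)        # a different random string every run
print(len(pw))   # 16
```

A generated password like this is exactly the kind of thing a password manager, or the paper-in-wallet method above, is for: nobody is expected to memorize it.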

Having a strong password for your computer should block access to it (the reader who asked this question mentioned that he had a 12-character password, which is good enough). But even then, a thief can use widely available software to copy files off a computer without logging into it.

And unless you're running a Google Chromebook, you'll need to store some data on the machine.

In Windows, I suggest using the free, open-source TrueCrypt. ("Open-source" means that other programmers have had a chance to inspect and improve its code, so you don't have to trust the developers when they say they did their job). It's not the prettiest or the simplest app, but it will let you create a special folder that turns into gibberish once locked. You can even create a TrueCrypt "container" on a cloud-based storage service like Dropbox, then open its contents on any PC — or Mac — running TrueCrypt.
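Tools like TrueCrypt don't use your passphrase directly; they stretch it into an encryption key with a slow key-derivation function, so that guessing attacks are expensive. A minimal illustration of that step using Python's standard `hashlib` (the salt, passphrase, and iteration count here are illustrative, not TrueCrypt's actual parameters):

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Stretch a passphrase into a 32-byte key with PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = os.urandom(16)  # a random per-container salt, stored alongside the data
key = derive_key("correct horse battery", salt)
print(len(key))        # 32 bytes

# Same passphrase and salt always yield the same key; change either
# and the derived key changes completely.
assert key == derive_key("correct horse battery", salt)
assert key != derive_key("wrong passphrase", salt)
```

The derived key, not the passphrase, is what actually encrypts the container's contents, which is why the "gibberish once locked" property holds even against someone who can read the raw disk.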

On a Mac, use the FileVault encryption built into Mac OS X (it's under the "Security & Privacy" heading in System Preferences), which automatically secures all of your data from anybody who doesn't have your computer's password.

Tip: Backup DNS can keep you online

The Domain Name System, the invisible switchboard your Internet provider runs to translate human-readable Web addresses into the numeric Internet Protocol addresses machines use, is usually the least dramatic part of Internet access. It works so well that you never know it's there, in the same way that you don't stop to think how calls reach your phone from any other phone in the world.

But on rare occasions, your Internet provider's DNS may go out of service; for example, Comcast suffered an hours-long DNS outage in November 2010. If that happens, you'll find that you can't get anywhere on the Web — not even a site like Google that should stay up in any event short of a Mayan apocalypse. More annoying yet, the lights on your cable or DSL modem (and maybe even your provider's tech support) may still suggest everything's fine.

Two alternate services, Google Public DNS and OpenDNS, can back up your provider's domain-name service, connecting your computer to other sites when your ISP stops doing that job. At other times, they may also work slightly faster and provide a little extra security against malicious sites.
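The fallback idea is simple: try the primary resolver first and fall through to a backup if it fails. A toy sketch of that logic, with stub functions standing in for real DNS queries (the lookup table and failure behavior are made up for illustration):

```python
def lookup_with_fallback(hostname, resolvers):
    """Try each resolver in order; return the first answer that comes back."""
    for resolver in resolvers:
        try:
            return resolver(hostname)
        except OSError:
            continue  # this resolver is down or can't answer; try the next
    raise OSError(f"no resolver could answer for {hostname}")

# Stub resolvers for illustration: the ISP's resolver is down,
# the public backup resolver works.
def isp_resolver(hostname):
    raise OSError("ISP DNS outage")

def backup_resolver(hostname):
    table = {"example.com": "93.184.216.34"}  # toy lookup table
    if hostname not in table:
        raise OSError("name not found")
    return table[hostname]

print(lookup_with_fallback("example.com", [isp_resolver, backup_resolver]))
```

In practice you don't write this yourself: listing a public DNS server as a secondary resolver in your network settings makes the operating system do the same fall-through for you.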

Rob Pegoraro is a tech writer based in Washington, D.C.

IT dept heads, read this

SECURE IT: Flexible work arrangements that encourage employees to work from home - or any location - make it difficult to control employee technology usage and IT departments need to develop policies to deliver and secure sensitive data on both IT-owned and employee-owned devices. -AP
THE last 24 months have brought an explosion of new devices, web applications, and social media platforms. With every new product release or social network launch, CIOs (chief information officers) are getting pressure from their employees, including senior executives, to open the corporate network to consumer devices and allow access to more of the Web. This migration of consumer devices like smartphones and tablets into enterprise computing is making CIOs very nervous.
The risks to data security are obvious and real, and the loss of control compared to the days when IT departments could pick and choose technologies is distressing. Some CIOs are reacting with bans on the use of employee-owned devices, but this can be counterproductive. The consumerisation of IT is inevitable and likely good for business, even if it has caused some heartburn.
Why inevitable? There are five trends that have brought us to what I see as a point of no return on consumerisation.
Trend #1: The rise of social media as a business application. This phenomenon is the traditional enterprise IT killer, not just a killer app. For knowledge workers, social networks have become necessary and ideal tools for building work relationships and conducting business.
For example, Dell deployed Salesforce.com's Chatter to our more than 90,000 Dell employees. Being able to follow opportunities is a key feature of this application, so social connections literally mean sales connections. Employers need to facilitate this type of social collaboration, not be threatened by it.
Trend #2: The blurring of work and home. According to a telecommuting forecast by Forrester, 41% of employers plan to implement telecommuting options this year, and 43% of the US workforce - more than 63 million workers - will telecommute occasionally by 2016.
Flexible work arrangements that encourage employees to work from home - or any location - make it difficult to control employee technology usage. IT departments need to develop policies to deliver and secure sensitive data on both IT-owned and employee-owned devices.
Trend #3: The emergence of new mobile devices. The mobile era has arrived; by next year global smartphone shipments will exceed personal computer shipments for the first time in history. In the wake of such a seismic shift, employees are showing up to work with their personal devices with increasing frequency. The pressure on IT departments to provide service and support for the employees' devices and applications of choice will be enormous.
Trend #4: Shifting business models require tech-savvy employees. Put the rise of social media together with e-commerce and mobile devices, and you get a marketplace in which word of mouth influences buying decisions as much as half the time.
According to McKinsey and Co, "word of mouth is the primary factor behind 20% to 50% of all purchasing decisions." As the control of corporate brands shifts to online conversations outside of the company's purview, organisations will increasingly value employees who can navigate the ecosystem and are influencers in their social networks.
Trend #5: Employee expectations of corporate IT are changing. Desirable hires don't want to give up their devices, weakening the recruitment and retention abilities of companies who refuse to accommodate them. Imagine how a 2011 college grad reacts when she arrives at her new desk and turns on her PC to discover that it's running a locked-down version of an operating system that was first released when she was 12.
Set a strategy
As these trends collide, consumerisation moves from being something we have all talked about for years to a crucial business decision. And savvy CIOs are making this a business issue because technology is becoming a talent issue.
From recruiting and employee satisfaction, to driving brand reputation, to enabling new business models, employee technology is a business issue, not an IT policy debate.
Today's consumerisation trends have yet to peak, which means that the pressure for change in most IT organisations will only intensify. Businesses that react thoughtfully and decisively now will reap benefits for the rest of the mobile era and beyond. How?
Articulate your company's end user workplace and technology philosophy and use that as a basis for setting a consumerisation strategy. Recognise that IT security and data protection policies that restrict the use of personal devices and social media applications may actually increase security and data loss risk.
Begin evolving security policies to protect data in a workplace whose employees are using a variety of devices and applications.
Liberalise rules that prohibit business use of employee-owned technology in your own environment, starting with smartphones.
Launch enterprise applications that mimic the best aspects of consumer communication and social media within your worker community. Pilot company-paid or employee-owned tablets with field workers and executives to see if they can replace other devices.
Communicate a clear point of view on company versus employee cost-sharing. Develop a business case for incremental investment by linking end user technology strategy with human resource planning, facilities planning, and business strategy.
Consider desktop virtualisation and other new technologies to reduce security and data loss risks as the demand for consumerisation grows. Confront the software licensing implications of consumerisation to ensure compliance.
Finally, avoid end-user stipends; the goal is to allow employees to use the devices they already prefer, not to shift the purchasing decision onto them.
The heart of the consumerisation trend is human desire; people want to work the way they live, using the Internet to facilitate relationships and communication.
It's also the foundation for the next wave of business. Companies that adapt quickly and thoughtfully to change the relationship between employees and the IT department will be better able to attract talent, execute new business models, and enhance competitiveness. So why fight it?
Varinderjit Singh is managing director, Consumer and Small Medium Business, Dell Malaysia and Singapore.

Planning principles for IT comfort, cost control

DIFFICULT DECISIONS: In tough economic times, pressure on budgets across all areas of enterprises is demanding hard choices in IT investment. - ConocoPhilips
IN TOUGH economic times, pressure on budgets across all areas of enterprises is demanding hard choices in IT investment, made even more problematic by the exciting possibilities - and potential problems - of new storage architectures.
In spite of the economic climate, there are four key, timeless strategic principles that will hold true through any macro crisis or technological leap forward, allowing enterprises to identify and measure costs, reduce capital expenditure, rein in operating expenses, and grow sustainably.
Virtualisation is the catchword for the new era of outsourced IT services. New storage architectures are coming online at an increasingly rapid rate. Cloud computing, virtual machine (VM) sprawl, capacity-on-demand architectures and other architectures demand a review of existing storage infrastructures, prices, costs and operational methods.
Cloud storage, with its promise of lower costs, and businesses freed to focus on their own core capabilities and priorities, is one of the prime contenders. However, enterprises still need to plan and account for their entire IT architecture costs, including virtualised and cloud-based components, in order to achieve true efficiencies and real ROI (return on investment).
Treating IT services or the cloud as just another utility alongside water and electricity ignores the fact that a company's IT infrastructure is not a fixed cost for a fixed supply, like the gas or electricity mains: it is a dynamic, constantly changing expression of the company's business processes and information flow.
Four principles of storage economics
Storage economics gives us a financial and economic view of storage decisions and architectures. A very common demand, and a very real problem, that IT people hear is "we have to reduce costs." CFOs say "reduce costs", while CIOs care about the technologies; they want more "blinking lights." Most of the time, the answer to these conflicting demands is to buy cheaper disks, servers and applications. Therein lies the problem: they confuse price and cost.
Purchasing low-cost storage solutions does NOT equate to lowering operating expenses or reducing the total cost of storage ownership (TCO).
For companies considering cloud computing and virtualisation, the benefits to the business can be huge. But IT management needs to realise just what it can - and cannot - virtualise, and company heads need to abandon the mentality that equates cost with CAPEX; otherwise they will come to a cloud-based strategy with the wrong expectations about what it can achieve for the bottom line.
They also need to be wary, if they are already using third-party service or infrastructure providers, of whose costs are being saved - their own or the provider's.
IT management should first identify the costs of the existing storage architecture, measure the costs, decide what levers would impact these costs, and then pinpoint who exactly will benefit from the cost savings.
Just transferring costs instead of actually reducing them obviously brings little benefit, and this is one reason why you need to look at all the cost savings and other benefits that could accrue, before taking a leap into the clouds. You need to think like an economist, talk like an accountant, and act like a technologist when driving these changes.
The following four principles provide a timeless framework which can help you make strategic and tactical investments to gain business benefits and successfully control the costs of your storage infrastructure in the long run.
Applying these four principles whenever a new architecture is considered will help separate price hype from the long-term operational costs associated with it.
1) Cost of ownership includes more than price
Remember that price does not equal cost. CAPEX on IT infrastructure has steadily diminished as a share of the total cost of ownership (TCO) of a data storage system over the past few years, from 50-60% to about 20% today.
The other 80% is mostly operating expenditure (OPEX), including labour, maintenance, and power. In fact, while the unit cost of disk space is approaching zero, the recurrent running costs of a system are becoming the largest and most critical item of IT TCO. These are the costs that managers need to identify and rein in.
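As a back-of-the-envelope illustration of the price-versus-cost distinction, consider the following sketch. All figures are invented for illustration; only the roughly 20% CAPEX share echoes the proportion cited above.

```python
# Illustrative TCO arithmetic. All figures are hypothetical examples,
# not vendor data; only the ~20% CAPEX share echoes the article.
def tco(capex, annual_opex, years):
    """Total cost of ownership over a planning horizon."""
    return capex + annual_opex * years

# A "cheap" array costs less to buy but more to run (labour, power,
# maintenance); a "premium" one is the reverse.
cheap = tco(capex=100_000, annual_opex=80_000, years=5)    # 500_000
premium = tco(capex=180_000, annual_opex=40_000, years=5)  # 380_000

print(cheap > premium)  # True: the lower price costs more to own
print(100_000 / cheap)  # 0.2: CAPEX is only 20% of the cheap system's TCO
```

The lower-priced system ends up costing more to own over five years, which is exactly the price/cost confusion described above.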
2) There are 34 types of cost
On the principle that "you can't improve what you can't measure," management needs to identify and quantify the actual costs that make up its TCO. Collectively, 34 types of cost have been identified (see the list below) that make up storage TCO: some hard or direct, some soft or indirect, some CAPEX, others OPEX.
Not all apply equally to every business, and a cloud-based strategy may reduce only some - it could increase others.
For instance, an imperfectly executed cloud-based strategy could increase litigation risk costs, or security and encryption costs, while leaving storage management labour unaffected.
Of the 34 costs listed below, hardware depreciation, maintenance and warranty, storage management, power consumption, monitoring, and datacentre floor space should be directly reduced by a cloud-based solution; software purchase/depreciation and software maintenance could also potentially be reduced.
Note, however, that all the other costs could increase under a cloud-based strategy if it is not implemented correctly.
Equally, some of the most important cost savings and advantages can be missed through old-fashioned thinking. The cost of growth is one item where a good cloud-based strategy can truly reap rewards, enabling businesses to scale quickly and expand their IT establishment at a fraction of the traditional outlay.
Businesses too wedded to planning on the basis of their existing needs and infrastructure may overlook benefits for the future.
IT planners need to take a full 360-degree look at all these 34 costs, decide which are most critical for their business processes - present and future - and use cloud-based and in-house resources accordingly.
Costs need to be categorised into those kept internal after the cloud transformation and those transferred to the cloud provider.
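A simple way to start that categorisation is a plain lookup table. The sketch below is illustrative only; its retained/transferred assignments follow the examples given in this article and are assumptions to be revisited for each business.

```python
# Hypothetical categorisation of a few of the 34 cost types after a
# cloud transformation. Each assignment is an assumption, not a rule.
costs = {
    "hardware depreciation":     "transferred",  # named above as directly reduced
    "storage management labour": "transferred",
    "power and cooling":         "transferred",
    "datacentre floor space":    "transferred",
    "litigation/discovery risk": "retained",     # may even increase if done badly
    "security and encryption":   "retained",
}

retained = sorted(c for c, where in costs.items() if where == "retained")
transferred = sorted(c for c, where in costs.items() if where == "transferred")
print(len(transferred), "transferred;", len(retained), "retained")
```

Extending the table to all 34 cost types, and reviewing it whenever the architecture changes, turns "whose costs are being saved" from a rhetorical question into a checklist.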
3) "Cheaper to Own" - Economically superior storage architecture creates value
The best-value storage architecture is the one that saves most, not the one that costs least. Moving costs off the internal or CAPEX balance sheet may simply load them onto other areas, if a cloud-based system is not implemented correctly.
Storage TCO has to be calculated for the long term, as mounting operating costs will soon make clear. The economically superior architecture may not be the cheapest to buy, but it should be cheaper to own, and IT planners need to consider carefully how cloud components will actually affect TCO. Some of the key ingredients include:
Virtualisation of volumes, file systems, storage systems;
Dynamically tiered storage;
Intermix storage;
Thin provisioning;
Power down disk, MAID;
Multiprotocol SAN storage;
De-duplication, data compression;
Integrated archive; and
Management, policy-based provisioning.
Storage virtualisation, together with tiered storage and dynamic (or thin) provisioning, has been repeatedly demonstrated to reduce TCO by 20-35% over older in-house or tiered-island storage architectures.
Out of the 34 types of cost, the ones which are most affected are waste, migration, copies and labour.
4) Econometrics will show you the way
Econometrics - an economic measuring system to quantify costs and track progress in reducing them - has to be put in place alongside good storage architecture. It maps storage initiatives or investments to areas of measurable cost, and uses this information to design, prioritise and roadmap activities based on their projected potential to reduce costs and support business needs.
Cloud-based systems in particular need econometrics to keep track of costs that are now often out-of-house and not readily identifiable by traditional internal accounting.
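A minimal econometrics sketch, again with invented figures, might simply record each cost category per quarter so that any reduction is measured rather than assumed:

```python
# Minimal cost-tracking sketch with hypothetical figures: record each
# cost category per quarter, then measure the fractional change.
from collections import defaultdict

ledger = defaultdict(dict)  # category -> {quarter: cost}

def record(category, quarter, cost):
    ledger[category][quarter] = cost

def change(category, start, end):
    """Fractional change in a cost category between two quarters."""
    before, after = ledger[category][start], ledger[category][end]
    return (after - before) / before

record("storage management labour", "Q1", 120_000)
record("storage management labour", "Q4", 90_000)  # after virtualisation, say
print(f"{change('storage management labour', 'Q1', 'Q4'):+.0%}")  # -25%
```

Even a ledger this crude makes the conversation with the CFO concrete: a claimed saving either shows up in a tracked category, quarter over quarter, or it was only a cost transfer.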
Obviously, there is a risk of getting lost in the cloud - losing not only data, but also money. A short-sighted focus on slashing CAPEX may only increase this risk, as technological change is clearly pushing IT towards longer-term time horizons in calculating TCO.
As IT departments find themselves caught between the two imperatives of technological advances and cost pressures, managers need to decide the basis for handling and balancing these imperatives.
The Four Principles of Storage Economics at least give a sound foundation for a storage architecture that really can rise above the clouds.
(Johnson Khoo is managing director of Hitachi Data Systems Malaysia)
The 34 types of cost
1. Hardware depreciation and leases
2. Software purchase or depreciation
3. Hardware maintenance or warranty
4. Software maintenance or warranty
5. Storage management - e.g. labour costs for provisioning, tuning, load balancing, troubleshooting and upgrades
6. Backup and disaster recovery - e.g. labour costs for backups and restores, disaster recovery planning and testing
7. Data migration, re-mastering labour costs
8. Data mobility - time and effort to move data to different tiers or archive solutions during the data lifecycle
9. Power consumption and cooling costs
10. Monitoring - SNMP, NOC and operations consoles for the storage, SAN and backup infrastructures
11. Datacentre floor space
12. Provisioning time - business impact of the time needed from when the request is made until capacity is presented to the host
13. Cost of waste (two types: usable but not allocated, and allocated but not used)
14. Cost of copies - database management systems (DBMS) and other applications often require multiple copies to be made
15. Cost of duplicate data - overhead associated with several copies of the same data
16. Cost of growth - every storage architecture has a cost of growth. In high-growth environments with the wrong architecture, the cost of growth can be acute
17. Cost of scheduled outage (microcode changes, capacity upgrades)
18. Cost of unscheduled outage (machine related)
19. Cost of unscheduled outage (people and process related)
20. Cost of disaster risk, business resumption
21. Recovery time objective and recovery point objective (RTO and RPO) costs - business impact costs resulting from the time it takes to restore systems and data to the required recovery time or point after a system failure or backup recovery
22. Data loss
23. Litigation, discovery risk - legal risk and e-discovery time costs associated with lawsuits
24. Reduction of hazardous waste - primarily an EU cost due to regulations such as RoHS. Noncompliant hardware may incur an additional tariff for disposal of the asset
25. Cost of performance - impact to the business (good or bad)
26. Backup infrastructure - includes backup servers, media servers, tape libraries, drives, etc
27. Backup media - local and remote media costs for backup; recurring and capacity related costs
28. Cost of risk with backup windows - business impact of shortened or limited backup windows
29. CIFS- and NFS-related infrastructure - filers, gateways and the necessary software to provide file servers and shared services in the enterprise
30. Local and remote data circuits - dark fibre used for SAN extensions, remote replication and the associated software
31. Storage area networking - dedicated Fibre Channel, iSCSI or NAS connection infrastructures. This includes routers, gateways, host bus adaptor switches and directors
32. Noncompliance risk (archive, data retention) - several legal and legislative requirements (HIPAA, Basel II, Sarbanes-Oxley, carbon emissions), non-compliance with which can incur fines, negative publicity and criminal prosecution
33. Security and encryption costs for protecting data and storage infrastructure
34. Procurement - costs associated with time and effort required to acquire hardware and software, including preparation, review, negotiation, selection and certification