
Thursday, March 31, 2011

Vodafone buys out Indian owner in mobile joint venture

By John Ribeiro

March 31, 2011 06:24 AM ET

IDG News Service - Vodafone said on Thursday it will buy Essar Group's stake in a local subsidiary, giving it the maximum equity share in a mobile phone company allowed by Indian law.
The deal resolves a dispute between Essar and Vodafone over the value of the stake.
Vodafone bought directly and indirectly through Indian partners a 67 percent stake in Vodafone Essar in 2007 from Hutchison Telecommunications International. Essar held the remaining 33 percent.
Under an agreement between Vodafone and Essar signed at the time, Essar had an option to sell its 33 percent share in Vodafone Essar to Vodafone for US$5 billion, or an option to sell between $1 billion and $5 billion worth of Vodafone Essar shares to Vodafone at an independently appraised fair-market trading value.
Vodafone said Essar Group has exercised its underwritten put option over 22 percent of Vodafone Essar. Vodafone, in turn, exercised its call option over the remaining 11 percent of the joint venture, resulting in a total cash payment of $5 billion.
Final settlement is expected no later than November, Vodafone said. Vodafone's published net debt figure already includes the $5 billion payment, it added.
Indian rules currently limit foreign equity to a maximum of 74 percent of a telecommunications services company. Vodafone, which currently directly holds 42 percent of the equity in the company, will after the acquisition hold 75 percent of the equity, a Vodafone spokesman said.
However, Vodafone will likely induct a local partner to hold about 1.4 percent of the equity in order to comply with government rules, the spokesman said. Indian partners hold the remaining 24.6 percent of the equity.
Vodafone's investment in India has run into rough times. The company took a £2.3 billion (US$3.3 billion) charge for the Indian operation in May last year, citing "intense price competition."
The company also faces a US$2.5 billion bill from India's Income Tax department, which holds that under Indian rules Vodafone should have deducted tax in 2007 before paying Hutchison.
India's income tax rules require that tax should be deducted before a payment is made to a foreign company or non-resident for assets in India. Vodafone contests the claim, saying that it is not liable to pay tax on the transaction that was executed outside the country by two foreign companies.
Essar did not comment on the Vodafone statement about the sale of its stake in the joint venture, although some sources confirmed the deal in private.
Earlier this year, Essar proposed merging Essar Telecommunications Holdings, the company that holds part of the stake in Vodafone Essar, with India Securities Ltd (ISL), a listed Essar Group company, to determine the true market value of its stake.
Essar said it adopted this route as Vodafone had blocked an initial public offering of Vodafone Essar last year. Vodafone opposed the merger proposal, saying that it could artificially increase the price Vodafone may have to pay to buy out Essar's stake in the joint venture. It said it had no objection to Essar holding an initial public offering for its stake in the joint venture.
The merger proposal may now have to be withdrawn by Essar, according to a source close to the situation. The merger proceedings are scheduled for a hearing on Friday at the Madras High Court, the source said.

John Ribeiro covers outsourcing and general technology breaking news from India for The IDG News Service.

Follow John on Twitter at @Johnribeiro.

Sent from my BlackBerry® smartphone powered by U Mobile

Wednesday, March 30, 2011

Four signs your consulting project will fail | TechRepublic

IT Consultant
Four signs your consulting project will fail
By Chip Camden | March 28, 2011, 11:38 AM PDT

Several years ago, I took on a project from a client who had given me pretty steady work for 10 years preceding. The idea of the project was to create a framework for the client's future software development, bridging between their existing technology and the platform to which they wanted to move: Microsoft's .NET Framework. We laid out a strategy, and I started work on prototyping and designing.

Since then, at least two generalized alternatives to this problem have emerged (both of which I helped develop), but at that time we were breaking new ground. Scoping the project was difficult, and it dragged on and on. Several months into it, we identified performance issues that added more time to address. Eventually, the client decided that the effort and expense wasn't going to yield the result they wanted, and they scrapped the project.

My relationship with the client remained friendly, but they haven't used me for anything since. I can't say that I blame them, if they were half as disgusted by that project as I was. I still feel a little sick to my stomach when I recall it. But a greater tragedy would result if I committed it to the depths of Lethe, because failures can teach us more than successes, if we're willing to learn from them.

This project exhibited these four signs of imminent failure that I should have noticed.

1: The project is running longer than a year.
Except in rare cases, if a project extends over more than a year, it has already failed. What's so magical about that year boundary? Funding usually follows an annual cycle, so it may be difficult to keep resources dedicated for longer than a year without delivering something. But it's mostly psychological. Project success depends more on human factors than on anything else. When the calendar rolls around to the same place where you began the project, and it's not even close to being finished, few of the participants can avoid thinking "This project will last forever." Their focus begins to shift from "How do we get this done right?" to "How do we get out of this?" (or worse, "How do I get out of this?"). Once people start looking for the exits, you can count on a rout ensuing.

2: The project is monolithic.
You can more easily avoid failure if your definition of success is easier to achieve. When a project must accomplish every one of several difficult objectives, it's set up for failure. Rather than adopting a pass/fail mentality, organize the project into discrete objectives, each of which may stand alone as a successful operation. By eliminating interdependencies, you can reap the benefit of five successes out of six attempts, instead of losing everything because of one failure. You can also decide in advance to drop one or more of those objectives if you get into a pinch, without sacrificing the entire project. Furthermore, if you deliver early and often, the client can provide feedback that may result in course corrections before you get too far along to change directions.

3: The project is not unified.
At first, this may seem to contradict reason number two, but what I mean here is that the project needs one team working together. Multiple teams, especially if they're in different locations, require extra attention to coordination and communication. I do almost all of my work remotely, so that arrangement certainly can work, but the connection to the rest of the team is often the weakest link in my projects. In every failed project in my history, a lack of adequate communication and coordination played a significant role in its demise.

4: The project is not the #1 priority.
A successful project may start out as a "when you have time to look at this" activity, but at some point it has to become your primary focus or it will never get done. A project doesn't become a #1 priority for a consultant until it also becomes a #1 priority for the client; if that doesn't happen, then eventually the client will decide to stop pouring money into it and divert those funds towards more pressing needs.
For the project in question, I should have insisted on breaking it down into smaller chunks that could have been delivered no more than two months apart — preferably less than a month apart. I should have hounded the client for feedback on those deliverables, and I should have refused to put more time into the project until I received it.

This experience made me a better consultant, but I still regret disappointing my client. Memories of that project have often haunted my occasional relapses into the Impostor Syndrome. I have to remind myself that even the most successful people must endure occasional failure.

Have you ever experienced any spectacular project failures? What did you learn from them?


Google launches online magazine, Think Quarterly


Dan Nystedt | March 24, 2011

TAIPEI, 24 MARCH 2011 - Google has launched its own quarterly online magazine, Think Quarterly, out of its operations in the U.K. and Ireland, saying that "in a world of accelerating change, we all need to take time to reflect."

The first issue of Think Quarterly is already freely available online and is dedicated to data, including data obesity, data impotence, data overload and open data.

"Think Quarterly is a unique communications tool that brings together some of the world's leading minds to discuss the big issues facing businesses today," the magazine says on its Twitter bio.
The magazine's Twitter feed says it launched on March 21, though there is no mention of the magazine on Google's blog, Twitter feed, Facebook page or newsroom.

In a note on the magazine's website, the managing director of Google's U.K. & Ireland Operations, Matt Brittin, said, "Think Quarterly is a breathing space in a busy world. It's a place to take time out and consider what's happening and why it matters."

The company dedicated the first issue to data to ask: "amongst a morass of information, how can you find the magic metrics that will help transform your business? We hope that you find inspiration, insights, and more, in Think Quarterly," he said.

The next issue of Think Quarterly will be available in May, followed by the third issue in July and the fourth in October.


89 per cent of mobile owners unaware of risks of using handset for banking


Carrie-Ann Skinner | March 23, 2011

LONDON, 23 MARCH 2011 - Nearly nine in ten (89 percent) mobile phone owners are unaware some smartphone apps can transmit personal details such as credit card numbers without the user's knowledge or consent, says AVG.

Research conducted in the US by the security firm, in conjunction with the Ponemon Institute, also revealed 91 percent were unaware apps can be infected with malware specifically designed to steal banking details.

Furthermore, 29 percent admitted they store credit and debit card details, such as card numbers or account numbers, on their handset, while 35 percent store 'confidential' work documents on their mobile phone. Nearly three in ten (28 percent) claimed to be unaware that using their smartphone for both business and personal purposes can put business information at risk.

AVG also said over half (56 percent) of mobile phone owners did not know that failing to log off properly from a social networking app could allow a hacker to post content to their profile or change personal settings without their knowledge. Meanwhile, over a third (37 percent) said they were unsure whether their social networking profile had been manipulated because of this.

"The findings of this study signal what could be an overlooked security risk for organisations created by employees' use of smartphones," said Dr. Larry Ponemon, chairman and founder of Ponemon Institute.

"Because consumers in our study report that they often use smartphones interchangeably for business and personal, organisations should make sure their security policies include guidelines for the appropriate use of smartphones that are used for company purposes."

J.R. Smith, CEO of AVG Technologies, said the "mobile internet does not have to be a risky environment" and mobile phone owners can stay safe by downloading low-cost or free antivirus products specifically designed to protect mobile data.

AVG already offers free security software aimed at Android smartphones, AVG Antivirus Free. The security firm has now launched a new version of the software specifically for tablet PCs. The software, which can be downloaded from the Android Market, scans apps, settings, data and media files in real time, looking for viruses and other malware.

"Mobile devices are more personal than your computer at home, as it goes next to your wallet and your house keys and contains relevant data, your contacts, your family photos and memories," said Omri Sigelman from AVG.


ampm highlights foodservice through Facebook app - Business Focus

Yesterday (29 March 2011)
ampm highlights foodservice through Facebook app

STUDIO CITY, Calif. -- Customers of ampm convenience stores aren't limited to the food items listed for sale; with ampm's "Secret Menu" app, they can create their own tasty combinations, In-Store Marketing Institute reported.

The app showcases combinations of items from ampm's prepared foods menu with descriptions of how to make them. Items include the fairly traditional, like the "Tickled Pickle," a cheeseburger loaded up with as many pickles as possible plus a few other toppings, along with the more unusual, like the "Bear in the Cave," a sandwich made with a Klondike Bar and two doughnuts. Customers can submit their own creations and comment on each other's.

The app, which ampm describes as "too much good stuff brought to a whole new level" on the company's Web site, debuted last summer.


Solo Iranian hacker takes credit for Comodo certificate attack


Gregg Keizer | March 28, 2011

FRAMINGHAM, 28 MARCH 2011 - A solo Iranian hacker on Saturday claimed responsibility for stealing multiple SSL certificates belonging to some of the Web's biggest sites, including Google, Microsoft, Skype and Yahoo.

Early reaction from security experts was mixed, with some believing the hacker's claim, while others were dubious.

Last week, conjecture had focused on a state-sponsored attack, perhaps funded or conducted by the Iranian government, that hacked a certificate reseller affiliated with U.S.-based Comodo.

On March 23, Comodo acknowledged the attack, saying that eight days earlier, hackers had obtained nine bogus certificates for the log-on sites of Microsoft's Hotmail, Google's Gmail, the Internet phone and chat service Skype and Yahoo Mail. A certificate for Mozilla's Firefox add-on site was also acquired.

SSL certificates validate the legitimacy of a Web site to the browser, assuring users that they're connecting to the real site, and that the traffic between their browsers and the site is encrypted.
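The trust model described above can be illustrated with Python's standard `ssl` module. This is an illustrative sketch, not code from the article: a browser-style client context performs exactly the two checks the attack subverted, and a forged-but-validly-signed certificate passes both.

```python
import ssl

# A browser-style TLS client context. By default it both verifies the
# server's certificate chain against trusted certificate authorities
# and checks that the certificate matches the hostname being visited.
ctx = ssl.create_default_context()

print(ctx.check_hostname)                    # True
print(ctx.verify_mode is ssl.CERT_REQUIRED)  # True

# A forged certificate issued by a trusted CA (as in the Comodo
# incident) defeats exactly these checks: the chain verifies, the
# hostname matches, and the browser shows the padlock even though
# the site is attacker-controlled.
```

The point is that certificate validation trusts the issuing authority, not the site itself, which is why a compromised reseller account was enough to impersonate Gmail or Yahoo Mail.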

Comodo CEO Melih Abdulhayoglu said last week that circumstantial evidence pointed to a state-backed attack, and claimed the Iranian government was probably behind it. "We believe these are politically motivated, state driven/funded attacks," said Abdulhayoglu.

He based his opinion on the fact that only Iran's government -- which could jigger the country's DNS (domain name system) to funnel traffic through fake sites secured by the stolen certificates -- would benefit.

In Abdulhayoglu's analysis, authorities could have used the certificates to dupe anti-government activists into believing they were at a legitimate Yahoo Mail, for example. In reality, however, the phony sites would have collected users' usernames and passwords, and thus given the government access to their e-mail or Skype accounts.

On Sunday, a single hacker took responsibility for the Comodo attack, backing up his claim with decompiled code.

"I'm not a group of hacker [sic], I'm single hacker with experience of 1,000 hackers," wrote the attacker in a post late Saturday. He called himself "ComodoHacker" and said he's 21 years old.

ComodoHacker alleged that he had gained full access to the Italian arm of Comodo's InstantSSL certificate-selling service, then decompiled a DLL file he found on its server to uncover the reseller account's username and password.

With the username and password in hand, said ComodoHacker, he was able to generate the nine certificates, "all in about 10-15 minutes." His message was signed "Janam Fadaye Rahbar," which reportedly means "I will sacrifice my soul for my leader."

The Web site is currently offline.

Robert Graham, the CEO of Errata Security, believes ComodoHacker is telling a straight story.

"As a pentester who does attacks similar to what the ComodoHacker did, I find it credible," Graham said Sunday on the Errata blog. "I find it probable that (1) this is the guy, (2) he acted alone, (3) he is Iranian, (4) he's patriotic but not political."

But Mikko Hypponen, the chief research officer of Helsinki-based F-Secure, sounded skeptical.

"Do we really believe that a lone hacker gets into a [certificate authority], can generate any cert he wants...and goes after instead of" asked Hypponen on Twitter .

Graham had an answer for Hypponen's question.

"[Comodo Hacker] started with one goal, that of factoring RSA keys, and ended up reaching a related goal, forging certificates," said Graham. "He didn't think of PayPal because he wasn't trying to do anything at all with the forged certificates."

ComodoHacker also lit into the West -- Western media in particular -- for quickly concluding that the Iranian government was involved when it had downplayed possible U.S. and Israeli connections to Stuxnet, the worm that most experts believe was aimed at Iran's nuclear program.

He also threatened to unleash his skills against those he said were enemies of Iran.

"Anyone inside Iran with problems, from fake Green Movement to all MKO members and two-faced terrorists, should [be] afraid of me personally," said ComodoHacker. "I won't let anyone inside Iran, harm people of Iran, harm my country's Nuclear Scientists, harm my Leader (which nobody can), harm my President."

MKO, or the "People's Mujahedin of Iran," is an Islamic group that advocates the overthrow of the current government of Iran.

"As I live, you don't have privacy in internet, you don't have security in digital world, just wait and see," ComodoHacker said.


Cybercriminals selling exploit-as-a-service kit

ECGMA says: New acronym now, EaaS aka Exploit as a Service!


Jeremy Kirk | March 28, 2011

LONDON, 28 MARCH 2011 - Cybercriminals are taking a page from the software-as-a-service playbook: they're now selling exploit kits complete with hosting services, with customers paying for the length of time the exploits are actively infecting computers.

The kits offer several kinds of exploits, or sequences of code that can take advantage of software vulnerabilities in order to deliver malware. Researchers from Seculert have found that at least a couple of these kits -- Incognito 2.0 and Bomba -- offer their own Web hosting as well as a Web-based management interface.

The new business model makes it easier for cybercriminals who may have trouble securing so-called "bulletproof" hosting, or ISPs willing to host malware servers, said Aviv Raff, CTO and cofounder of Seculert. The company specializes in a cloud-based service that alerts its customers to new malware, exploits and other cyberthreats.

The whole package is intended for criminals who want large numbers of hacked computers running Microsoft Windows. Once the computers are hacked, the machines can be used to steal personal data, send spam, conduct denial-of-service attacks, or serve other purposes.

It's also cheaper. The criminal clients pay only for the time that their exploits are live, meaning that if the ISPs shut down the rogue servers for any reason, they don't have to pay, Raff said. He estimated the malware hosting and exploit service costs between US$100 and $200 a month.

"This is all managed by the 'service provider,'" Raff said. "You as a customer only pay for the time the exploits are hosted. We don't have the exact numbers, but like any other 'cloud-based' service, it's reasonable that the account is much more affordable than buying the kit and hosting it."

The clients must provide their own malware for the exploits to deliver. They must also hack websites in order to redirect victims to the crimeware servers hosted by Incognito's operators. When a potential victim visits one of the infected websites, an injected iframe pulls in content from Incognito's servers, which then tries to compromise the machine with various exploits.
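The injection pattern described here -- a hidden iframe on a legitimate page pointing at an attacker's server -- is simple enough to sketch a detector for. The following is an illustrative example, not Seculert's tooling; the domain `exploit.example` and the page markup are hypothetical.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class IframeScanner(HTMLParser):
    """Collect the src of every <iframe> that points off-site --
    the injection pattern described in the article."""

    def __init__(self, site_host):
        super().__init__()
        self.site_host = site_host   # the legitimate site's hostname
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        src = dict(attrs).get("src", "")
        host = urlparse(src).netloc
        # Flag iframes whose source is hosted elsewhere.
        if host and host != self.site_host:
            self.suspicious.append(src)

# Hypothetical example: a legitimate page with an injected, invisible
# iframe pulling exploit content from an attacker-controlled server.
page = (
    '<html><body><p>Latest news</p>'
    '<iframe src="http://exploit.example/kit" width="0" height="0">'
    '</iframe></body></html>'
)

scanner = IframeScanner("")
scanner.feed(page)
print(scanner.suspicious)  # ['http://exploit.example/kit']
```

Real crimeware kits obfuscate the injected markup, so production scanners do far more than hostname comparison, but the sketch shows why an off-site, zero-size iframe is the classic tell.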

So far, Seculert counted about 8,000 legitimate websites that have been hacked and are pulling exploits hosted by Incognito. Some victims are infected when they go to those sites through normal browsing, Raff said. But the hackers have also used spam campaigns to try to draw people to infected sites.

One recent spam message spotted by Seculert purported to be a support message from Twitter and encouraged people to click on a link to what was supposedly a video shot near the damaged Fukushima nuclear reactors in Japan, hit by the tsunami earlier this month. If a person clicked on the link, no video was displayed, and if the computer had a software vulnerability, a Trojan downloader was installed.

Incognito 2.0 provides a Web-based management interface where clients can check how many computers have been infected and what type of exploit was used, Raff said. Seculert posted screenshots on its blog.

Incognito 2.0 seems to be a growing business: Seculert's researchers found after analyzing its infrastructure that at least 30 clients were using it to install everything from the Zeus banking Trojan to fake antivirus software to generic Trojan downloaders that can install other malware on an infected computer.

At least 150,000 machines were infected during a two-week period in January. About 70 percent of those computers were infected using an exploit for a vulnerability in the Java Runtime Environment, and 20 percent were infected via an Adobe Reader vulnerability.


SingTel unveils hybrid cloud for enterprises


Jack Loo | March 29, 2011

SINGAPORE, 29 MARCH 2011 - Singapore telecom company SingTel and virtualisation software provider VMware have unveiled a hybrid cloud computing offering called PowerON Compute.

PowerON Compute allows companies to expand the resources of their private cloud infrastructure into SingTel's secure public cloud. VMware's vCloud Datacenter Services offers a common management platform that allows the seamless transfer of workloads between the two clouds.

"Hybrid clouds provide customers with freedom of choice that will eventually give rise to applications that can be 'dragged and dropped' between clouds and delivered securely to any device, anywhere, at any time," said Paul Maritz, CEO, VMware.

PowerON Compute promises to allow companies to turn on computing resources "just like they turn on water from a tap", said Bill Chang, executive vice president of Business Group, SingTel.

The two companies said that cloud computing enables organisations to upgrade IT resources without the heavy costs and complexities of purchasing and managing additional servers and systems, reducing operating costs by about 70 per cent.

The service currently offers reliability of up to 99.99 per cent, according to Chang, and this will be pushed higher to 99.999 per cent in the future.

Chang also clarified that PowerON Compute is targeted at enterprise-sized customers, while the cloud services offered by SingTel subsidiary Alatum are aimed at small and medium businesses (SMBs).

The partnership was first announced in September last year, when SingTel became the first telco to sign up for VMware's vCloud Datacenter Service programme.

PowerON Compute will be available as a trial service beginning in April.


Tuesday, March 29, 2011

Mobile devices: the new disaster recovery frontier

By Linda Tucci | Mar 23, 2011

As more and more data is stored on mobile devices, disaster recovery planning for mobile is becoming more critical to ensuring business continuity.

The proliferation and ever-increasing diversity of workplace mobile devices -- company-issued and employee-owned -- will push CIOs to reconsider their IT disaster recovery and business continuity plans, experts say. Reducing the risks associated with workplace mobility also will drive technology purchases, from mobile device management (MDM) tools to desktop virtualization.

"Executives are dragging documents through iTunes and onto their iPads. They are editing them with something like QuickOffice or Documents To Go, or Apple's Keynote and Pages products. The documents are being modified and shared, and the data stores completely cache-forwarded out there into the field; nobody is thinking about how to get them back," said Bill French, a Denver-based IT consultant and software developer. "So, the cart is definitely in front of the horse on this one for most organizations."

Mobility in the workplace is a top concern for CIOs, with good reason. On average, 44% of employees carry a company-owned mobile device, according to The Nemertes Research Group Inc.'s latest IT Benchmark, an annual study of more than 200 organizations spanning 18 vertical industries. That number is projected to reach 70% by 2012. Moreover, at 11% of the organizations studied, employees rely 100% on smart devices for communications -- and that's just the company-issued devices.
Add to this new reality the growing trend of allowing employees to use their own smart devices, and suddenly mobility is not only a Tier 1 service for IT departments, but also wildly out of IT departments' control.

"Now you have the risk of corporate data leaking out into the personal side of the device. And if you do implement backup and recovery for the smartphone, what do you do when it is a personal device?" said Ted Ritter, senior research analyst at The Nemertes Research Group Inc. in Mokena, Ill. "The employee certainly doesn't want you to back up their personal data to the corporate server," he said. Or wipe out three years of family photos, if the device is lost.

Companies that have dealt effectively with this conundrum work with their lawyers to craft an acceptable use policy for employees to sign; that's a legal process that can take as long as a year, Ritter said.

Such policies typically state that if a company needs to wipe the device clean or confiscate it for reasons of e-discovery or an employee action, it has the right to do so, even with employee-owned devices, he said. But these policies don't fly in Europe, where personal data privacy laws are stronger.

IT disaster recovery in the age of mobility
So far, however, mobile devices are not really factoring into a CIO's IT disaster recovery and business continuity strategy, experts say.

"We don't have any real data on mobile devices and disaster recovery, because it is an area that no one is paying attention to," Ritter said. "We are not seeing people thinking it through to the step where they recognize that these devices are becoming walking computers."

A disaster recovery plan for mobile devices is not on most CIOs' radar, IT consultant French said. "I don't think too much about mobile devices and DR, because CIOs are not worrying about it," he said.

The same goes for players in the fast-growing MDM market. "The intersection of DR and mobile hasn't yet been a big topic I have heard from enterprise customers, although I think it is right around the corner," said Bob Tinker, president and CEO of Mountain View, Calif.-based MobileIron Inc.

The mobile industry tends to focus on the device rather than on the management and security of the applications on the smartphone, Tinker said. "The key thing for CIOs is that it's not about the device, it's about the data."

Top-down management of mobile devices a thing of the past
The lack of awareness is understandable. When company-issued laptops, BlackBerrys and yesterday's cell phones represented the bulk of mobile devices in use at companies, CIOs could confidently say that IT disaster recovery and business continuity for their mobile arsenals was no big deal -- provided, of course, they had solid plans.

Research In Motion Ltd. offered decent disaster recovery with its BlackBerry Enterprise Server. With other so-called ruggedized devices (a Windows phone, for instance), the data typically was synced to a centralized server. When a cell phone got lost or stolen, it didn't much matter, except for the pain of rekeying phone contacts.

Not so long ago, when the issue of disaster recovery and mobile devices came up, the conversation was assumed to be about how organizations could use employee cell phones and the handful of executive not-so-smartphones to instruct and inform personnel in the event of a disaster. The advent of the iPad and other mobile devices that not only access data but can also generate and store it means that disaster recovery plans now have to treat them as endpoints.

Consider the caseload of Atlanta-based MDM vendor, AirWatch LLC, which supports the spectrum of mobile platforms, from the Apple iOS to Symbian. In January alone, the company worked on three cases involving business executives losing a personal iPad that held sensitive corporate data and lacked the security software to wipe it clean. One iPad, left behind by a CEO in a backseat pocket on an airplane, contained notes on a top-secret acquisition.

"This is not a classic example of disaster recovery, where a catastrophe brings down a data center. But let me tell you, this is a disaster that has to be dealt with," said AirWatch Chairman Alan Dabbiere.

Enterprise mobility driving desktop virtualization
One of the ways companies are dealing with IT disaster recovery and business continuity for mobile devices is by investing heavily in desktop virtualization, Nemertes Research's Ritter said. "You can still get to the desktop and even edit a Word doc on the device; but technically, all that is going on in the data center. The device is only a remote client."

Another approach is focusing on "secure containers," products offered by such MDM vendors as AirWatch, Good Technology Inc. and BoxTone that address the security issues posed by the errant iPad.

"This is not disaster recovery in the way we usually talk about it, but security. Security is the biggest risk factor in deciding which mobile devices to allow onto the corporate network," Ritter said.

"Rather than focusing on trying to back up mobile devices, what we have seen organizations do is restrict the amount of data that can be downloaded as much as possible," Ritter said. So, if the device supports Microsoft's ActiveSync, for example, the employee can access email but will be blocked from accessing SharePoint and other servers holding corporate data, he said.

That is pretty much the approach taken by The Vanguard Group Inc., the Valley Forge, Pa.-based investment firm, said Abha Kumar, its principal for IT. Employees are given the option of using a company-issued BlackBerry or the smartphone of their choice.

Nothing is stored on the personal device, Kumar said. "We provide a pipe [using software from Good Technology] into our email and calendar at this point, so the device is secure from that point of view," she said. "There might be something on the cache that holds data, but as soon as we find that a person has lost the device, we can zap the application."
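Kumar's "zap the application" workflow can be sketched as a server-side state change plus a queued wipe command for the secure container. This is a hypothetical model, not Good Technology's actual API:

```python
# Hypothetical sketch of the lost-device workflow: when a device is reported
# lost, the server marks it and queues a wipe of the container app only --
# the owner's personal data on the device is untouched.

class DeviceRegistry:
    def __init__(self):
        self.devices = {}            # device_id -> status
        self.pending_commands = []   # commands queued for devices

    def enroll(self, device_id):
        self.devices[device_id] = "active"

    def report_lost(self, device_id):
        """Queue a wipe of the secure container holding corporate data."""
        self.devices[device_id] = "lost"
        self.pending_commands.append((device_id, "wipe_container"))

reg = DeviceRegistry()
reg.enroll("ceo-ipad-01")
reg.report_lost("ceo-ipad-01")
assert reg.pending_commands == [("ceo-ipad-01", "wipe_container")]
```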

With their company-provided BlackBerry, Vanguard crew members, as they are called, can access their work email, calendars and some business applications, such as Vanguard's Siebel customer relationship management application and the company intranet.

"If a crew member submits an expense report, I can approve it on my BlackBerry," Kumar said.

And because Vanguard is a regulated business in which security is paramount, client data is off-limits to mobile devices. Vanguard client service reps, who routinely deal with client information, do not have BlackBerrys because Vanguard does not want client information to go outside its four walls. "So, even as we talk about new technologies and being more flexible and being more mobile, the thing we protect above all is client information," Kumar said.

Brownlee Thomas, principal analyst at Cambridge, Mass.-based Forrester Research Inc., agrees that most companies do not have a formal mobility policy, never mind a disaster recovery plan for mobile devices.

"They have lots of policies, because mobile, fortunately or unfortunately, is not centrally provisioned at most companies. It is either provisioned at the division level or through corporate procurement, the same people buying and dispensing your staplers," Thomas said.

"The CIO doesn't necessarily have a lot of control."

Cloud: IT's future, or mere hype?

By Sunil Chavan, Director, Software Group & Cloud solutions, Asia Pacific, Hitachi Data Systems | Mar 25, 2011

Many tout cloud computing as the standard way companies will run their IT shops in the future. But are many of these claims exaggerated to push cloud out into the market?

It hasn't taken long for customers to recognize the benefits of cloud for on demand IT delivery, the most obvious being significant cost savings. Organizations generally over-purchase to deal with fluctuating storage and resource requirements, leaving them with underutilized hardware assets. Cloud's ability to grow and contract storage resources according to business needs minimizes this upfront capital expense.

Customers can reduce their operational expenditure by deploying cloud models, paying only for what they consume and eliminating day-to-day management tasks.
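The over-purchasing argument above is easy to see with back-of-the-envelope arithmetic. All figures below (per-terabyte rates, monthly usage) are invented for illustration:

```python
# Hypothetical storage demand, in TB consumed per month over a year.
months = [20, 25, 30, 60, 35, 30, 28, 26, 90, 40, 32, 30]

# On-premises: provision for the yearly peak, at a notional $30/TB/month.
peak_tb = max(months)
on_prem_cost = peak_tb * 30 * len(months)

# Cloud: a higher notional rate of $45/TB/month, but pay only for actual use.
cloud_cost = sum(tb * 45 for tb in months)

utilization = sum(months) / (peak_tb * len(months))
assert utilization < 0.5          # most of the purchased capacity sits idle
assert cloud_cost < on_prem_cost  # a higher unit price can still cost less
```

With these numbers, the fixed estate runs at roughly 40% utilization, which is exactly the underutilized hardware the article describes.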

While on-demand access to computing is the industry's goal, it also poses risks for IT organizations and can be very disruptive to business processes and controls. IT organizations should consider developing an internal cloud-enabled architecture to provide greater business agility, and a process for projects that need to be outsourced to a public or hybrid cloud provider. This way they can move into the cloud in a controlled, programmatic fashion, mitigating the associated risks.

There are three deployment choices to consider for moving into the cloud – private, hybrid and public. It is critical to start your cloud initiative with the right deployment model.

Put simply, a private cloud is cloud-enabled infrastructure within the physical walls of a data centre. A private cloud can provide many of the benefits of cloud without the security risks associated with public deployments. Because it's accessed over an internal network or intranet, it's as secure as the rest of your data. Control over a private cloud and its surrounding environment (networks, servers, etc.) can help you achieve enterprise-level SLAs, but at the expense of operational cost savings such as physical floor space, power and cooling, and possibly management overhead. A scalable storage platform like the Hitachi Virtual Storage Platform (VSP) is key to a successful private cloud deployment for storage.

The hybrid or trusted cloud resides at a trusted service provider. Access is limited to appropriate resources at your organization and delivered over a virtual private network or secure internet connection. Since the infrastructure is out of the organization's direct control, service levels could be impacted by external factors. Customers should consider the physical security of the environment, which is why it's important to understand the service provider's process and requirements around physical access.

Lastly, the public cloud is similar to the hybrid, except that there is usually more general access over the internet, providing limited security. Many public cloud offerings are very inexpensive or free, and SLAs are generally not guaranteed or measured against different standards. Additionally, value added services and features such as encryption, compression, backup, tiering and replication are usually not available from public providers.

Regardless of the type of cloud, there are some key features every cloud platform should have. Look for a scalable, multi-tenant object store with a REST interface, plus value-added features: compression and single instancing to improve cost savings, encryption for greater security, and billing and chargeback for organizations or service providers that want to bill each business unit based on consumption.
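The chargeback feature mentioned above boils down to metering per-tenant consumption and applying a rate. A minimal sketch, with invented tenants and an invented rate:

```python
# Meter per-tenant writes to a multi-tenant object store, then bill each
# business unit for what it consumed. Rates and usage are hypothetical.
from collections import defaultdict

RATE_PER_GB = 0.12  # notional monthly $/GB

usage_log = [            # (tenant, object_size_gb) recorded as objects land
    ("finance", 120.0),
    ("marketing", 40.0),
    ("finance", 80.0),
]

def chargeback(log, rate):
    totals = defaultdict(float)
    for tenant, gb in log:
        totals[tenant] += gb
    return {tenant: round(gb * rate, 2) for tenant, gb in totals.items()}

bills = chargeback(usage_log, RATE_PER_GB)
assert bills == {"finance": 24.0, "marketing": 4.8}
```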

Use a phased approach to identify the most appropriate cloud platform. Start by identifying the data in your environment that has lower business value and lower SLA requirements, such as home directory shares, static data or backup content, which can be moved from on-site "primary" to cloud "secondary" storage.

Moving this peripheral data will help you minimise administrative and management overhead, and provide immediate cost savings. Organizations need IT agility to maintain their edge in today's competitive market. To achieve this, cloud promises an on-demand service model that can support your business needs today, while providing a solid foundation for the data center of the future.

Aston Martin relies on Siemens PLM system to standardize collaboration

By Enterprise Innovation Editors | Mar 25, 2011

Siemens is making collaboration easier for sports car maker Aston Martin through its PLM software, deployed company-wide to improve productivity, processes and collaboration across the automaker's global organization.

The growing sophistication of design and development has forced numerous automotive original equipment manufacturers (OEMs) to evaluate the systems currently in place to ensure they are using best-in-class technology. Aston Martin began its extensive and ongoing competitive PLM technology evaluation two years ago. The decision to migrate from its current technology to NX and Teamcenter is critical to its mission of continuing to produce some of the world's greatest sports cars.

"The automobile industry is undergoing enormous transformation both in terms of technology and business operations. The increasing complexity of vehicles and changing economic conditions are forcing automakers to re-evaluate their existing PLM applications to align with the best available in the market," said Sanjeev Pal, PLM analyst at IDC Corp.

"Luxury automotive manufacturers like Aston Martin must make their product decisions earlier and more efficiently in today's marketplace. We are pleased that our technology has been selected for the advanced product planning through detailed engineering processes which are critical to increased productivity," said Chuck Grindstaff, president and chief technology officer, Siemens PLM Software. "It's truly a must for OEMs to be able to manage increased sophistication across all systems in a car to ensure quality while reducing time to market."

Aston Martin's decision to adopt Siemens PLM Software's technology as the corporate-wide PLM standard highlights the importance of an open PLM environment to enhance innovation and manage increased sophistication. It also demonstrates the scalability of Siemens PLM Software solutions to match the diverse deployment requirements of the full spectrum of automotive OEMs.

Data center disaster recovery? Just teleport it!

By Laura Smith | Oct 12, 2010

Advances in virtualization have enabled a host of capabilities and benefits for the enterprise, but one key advantage is the ability to move an entire data center's operations from one site to another in case disaster strikes.

Of course, this kind of protection at the data center level requires fast networks and pricey hardware. Not every application can justify the cost -- and that's why experts such as Edward Haletky of The Virtualization Practice and Ray Lucchesi, president of Silverton Consulting Inc. in Broomfield, Colo., recommend a mix of virtual disaster recovery solutions. Among the questions CIOs need to answer are these: How fast do I need to be back up and running? How many virtual machines (VMs) do I want to back up? Can I back up 1,000 VMs in my backup window?

Such features as high availability, fault tolerance and continuous data protection are already built into virtualization hypervisors for business continuity, but a new raft of VM-aware software products makes backing up data even easier. To reduce the amount of data traveling from the data center to a hot site, most backup software includes some deduplication capability. A few vendors, such as PHD Virtual Technologies Inc., also perform source deduplication, which transfers even less data over the network. Vizioncore, a Quest Software Inc. company, has a similar product that also does change block tracking to eliminate the backup of null data (0s or unused blocks). Veeam Software Corp.'s products have capabilities similar to those of PHD and Vizioncore, as well as an automatic boot test to ensure the recovery system is working.
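The two data-reduction techniques named above can be sketched in a few lines: source deduplication hashes each block before it leaves the source and transfers only blocks the target has not already stored, while the null-block check stands in for the elimination of unused data. Block contents here are invented:

```python
# Illustrative sketch of source-side deduplication plus null-block skipping.
import hashlib

def backup(blocks, target_store):
    """Return how many blocks actually cross the network."""
    sent = 0
    for block in blocks:
        if block == b"\x00" * len(block):
            continue                      # skip null (unused) data entirely
        digest = hashlib.sha256(block).hexdigest()
        if digest not in target_store:    # hash at the source before sending
            target_store[digest] = block
            sent += 1
    return sent

store = {}
blocks = [b"alpha", b"beta", b"alpha", b"\x00" * 5, b"beta"]
assert backup(blocks, store) == 2   # duplicates and the null block never travel
```

Real products apply this per fixed-size disk block rather than per object, but the saving mechanism is the same.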

Consolidation leads to new disaster recovery solutions
Most enterprises have bought into a backup and recovery solution already, but some are willing to rearchitect, given the advances in virtual disaster recovery solutions. For Jon Nam, director of technology at Macy's Merchandising Group Inc. (MMG) in New York, a data center consolidation is giving the company an opportunity to map out a new disaster recovery and business continuity strategy. The department store chain is consolidating its stores' and headquarters' data centers to two main sites, one in New York, the other at Macy's Cincinnati headquarters. Those two sites will be virtualized using Palo Alto, Calif.-based VMware Inc.'s ESX product and act as hot sites for each other.

Macy's is debating whether to own or rent these data centers, "looking at what it costs to maintain the sites, as well as overseas locations," Nam said. "Since the consolidation, a lot of our business practices and strategies have changed. We are re-analyzing every application, server role and admin, determining who's going to manage it in case of a disaster." As in every business, it always comes down to cost. Macy's might pay someone for drop shipments and to have a backup site operational 24 hours. "At MMG, even if we're down for three or four days," it's not a disaster, he added. "Obviously our product line is designed and sourced nine months ahead."

As Nam is finding out, there are several options for backing up virtual environments to a hot site. Hardware-centric SAN arrays require some scripting to partially automate a failover, according to Virtualization Practice's Haletky and Silverton Consulting's Lucchesi. Even then, the hot site will inherit a crash-consistent copy of the machine as it was at the start of the failover. To deal with this problem, VMware developed Site Recovery Manager (SRM), a layer of software that ensures the replicated data on a hot site is a good, non-crash-consistent copy of the VM, Haletky said.

However, SRM takes about an hour for the hot site to recover production software and business processes, according to Lucchesi. "What's missing in the site recovery space is access to cluster managers," he said. "Geo-clusters are used to provide instantaneous site recovery, and obviously cost more money. VMware's SRM is not in that ballpark," he said.

As part of their virtual disaster recovery solutions, CIOs should determine which applications are mission-critical, and put in place a mix of technologies. "VM [disaster recovery] does not have to consist of only one approach," Lucchesi said. "Automated failover may well be limited to only a few critical virtual machines, with the rest relegated to less-automated recovery."

Disastrous business continuity statistics
Enterprises that already have virtual disaster recovery solutions might want to reconsider their plans, given the results of a recent survey by Vernon Hills, Ill.-based technology services provider CDW LLC of organizations that suffered significant network disruptions in the last 12 months. Of the 200 IT managers at medium-sized and large businesses it surveyed, 82% said that before their disruptions, they were confident their IT resources were prepared to support local business operations effectively in the event of a disruption. A quarter, however, experienced disruptions that lasted more than four hours. Based on that survey, CDW estimates that such network outages cost U.S. businesses about $1.7 billion in lost profits.

The survey also indicated that 82% of the most significant network disruptions that U.S. businesses experience -- due to loss of power, hardware failure or loss of telecom services -- could be reduced or avoided by implementing measures in any comprehensive disaster recovery and business continuity plan.

Virtualization as a disaster recovery plan? Possible, but not ideal

By Christina Torode | Mar 25, 2011

In the wake of recent disasters, CIOs are scrambling to ensure their companies are ready when disaster strikes. But can virtualization help make recovery possible?

Server and desktop virtualization projects are under way at Christus Health to meet business goals that range from more flexible access to data and less power consumption, to electronic health care regulations and disaster recovery planning.

"We were hit by hurricanes that caused major outages in our organization. Now we're building a client computing model that allows a physician at a hospital that went down to pick up a satellite phone or whatever is at hand, and get immediate access back to our infrastructure," said Todd Bruni, director of client computing services and configuration management for Christus Health, a health care company with 30,000 employees and 40 hospitals and affiliated facilities.

If a hospital loses power, employees or physicians remain tethered to the company's primary or backup disaster recovery facility because Bruni's team has been steadily virtualizing all client devices using virtualization technologies from Citrix Systems Inc. The first phase of the project was the introduction of Citrix-based server-based computing to host applications in the data center. The second phase was moving about 10% of the application portfolio (which covered approximately 50% of employees' data needs) off desktops and into the data center -- using thin clients as the front end and Terminal Services on the back end. The stage under way now is the build-out of a virtual desktop infrastructure (VDI) for more complicated clinical scenarios, such as access to medical records and to back-end financial systems.

"These are solutions that were not well built or intended for a server-based computing model or Terminal Services, so we needed VDI," Bruni said.

Virtualization by no means replaces a full-fledged disaster recovery plan -- Christus Health's data is replicated in "hot, hot" scenarios between its primary and secondary disaster recovery facilities -- but virtualization simplifies real-time replication and data portability.

"Virtualization is making it possible for our client services to be portable in case of a disaster," Bruni said. "All you need is an agent on any client device, and some type of Internet access."

Core business apps running on a virtual server infrastructure "allow for portability and replication that we wouldn't have had with dedicated physical systems," Bruni said.

Weighing the costs and benefits of VDI
A VDI is costly, however, as Chelo Picardal, chief technology officer for the city of Bellevue, Wash., found out when she started investigating desktop virtualization for 1,500 employees in 13 departments. "Server virtualization was an easy sell because you're replacing the cost of buying physical servers anyway," she said.

"With virtual desktops, you still have to buy PCs for people, but now you also have to buy the virtualization software and invest in an infrastructure that will hold all the data that used to be on the desktops -- where is that funding going to come from?"

Picardal does not see desktop virtualization as benefiting the city's disaster recovery strategy, but views it instead as an "efficiency" play for the IT department. "You can give remote workers access to their data, but we are looking at it more as an efficiency gain in terms of maintenance."

Ask her about the disaster recovery benefits of server virtualization, on the other hand, and Picardal has a checklist readily available:
• Workloads are easily portable from the primary to the secondary disaster recovery site, and users experience no downtime.
• Virtualization eliminates the need to buy double the hardware to replicate physical servers between the two facilities. This reduces costs, and reduces drift and hardware compatibility problems between the primary and secondary facilities. That in turn reduces downtime.
• Applications that need to be highly available remain that way when a failover to an alternate site occurs.

"When you think about high availability, the VM [virtual machine] becomes the point that fails over," said Chris Wolf, analyst with Stamford, Conn.-based research firm Gartner Inc. "That's a really big deal because traditionally, enterprise IT could cluster only a small percentage of apps for high availability because that type of architecture had to be written into the apps. Whereas with virtualization, any application can be made highly available and resilient to hardware failure."

Above all, however, Bellevue's Picardal can guarantee her performance service-level agreements (SLAs). "For a long time, there were a lot of things we couldn't promise that the customer really wanted. The best we could do is get them back up maybe in a half-hour in a disaster scenario. Now, with server virtualization, unless the entire [data center] facility goes down, the customers don't even notice it."

With the city's VMware Inc. server virtualization technology tied to its storage area network, which has deduplication, "you can get really close, or exceed what the customer needs," Picardal said. "Let the customer drive your DR needs, and you'll find that virtualization really allows you to meet those needs fairly easily," she said.

The city's public-facing applications, which have a high-availability SLA, can be backed up and returned to service with minimal downtime as a result of virtualization. That was the case when one of the city's websites was defaced, Picardal said.

Testing a disaster recovery plan made easy
Testing a disaster recovery plan is perhaps one of the most painful tasks an enterprise IT department faces. The process is so complicated and demoralizing that some departments have been reduced to just reading the disaster recovery plan's documentation and checking a box stating they are prepared for a disaster, Gartner's Wolf said.

"I've seen companies just quit testing disaster recovery because it was bad for morale. They would run into so many problems trying to recover data, application and hardware in the DR facility because the hardware wasn't an exact match; and it would often take the IT staff days to get through the DR exercise," Wolf said.

Virtual machines, however, remove the necessity that hardware -- from devices to the firmware on them -- be an exact match between the production facility and disaster recovery facility. "It's so easy to validate that an application is going to come online in a VM, and test that regularly," Wolf said. "That's generally not an option with physical hardware."
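The regular validation Wolf describes can be automated as a loop: boot each replicated VM in isolation and probe whether its application answers. A simulated sketch; the boot and probe hooks here are stand-ins, not a real hypervisor API:

```python
# Hypothetical DR test harness: boot every replicated VM in an isolated test
# network, probe each one's application, and report what failed to come online.

def run_dr_test(vms, boot, probe):
    """Return the names of VMs whose application failed to come online."""
    failures = []
    for vm in vms:
        boot(vm)
        if not probe(vm):
            failures.append(vm)
    return failures

# Simulated environment: one VM's application never answers.
booted = []
healthy = {"web-01": True, "db-01": True, "app-03": False}
result = run_dr_test(list(healthy), booted.append, healthy.get)
assert result == ["app-03"]
```

Because the whole exercise is scripted, it can run on a schedule instead of once a year, which is what makes testing beyond Tier 1 applications practical.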

Because disaster recovery testing is simple to do in a virtual environment, many enterprises aren't testing just Tier 1 applications, but now are moving down the line of business applications to test their ability to bounce back from a disaster, Wolf said.

Because VM environments are easy to isolate, "you can do recovery testing to your heart's content without having any impact on the production environment," said Nelson Ruest, principal at consultancy Resolutions Enterprises Ltd. in Victoria, British Columbia. "Recovery testing is as simple as changing a [network interface card] that is assigned to a VM."

The not-so-simple part
With server or client virtualization, overall systems maintenance and recovery are simplified. Workloads, whether they're on a server or client, are isolated from the underlying hardware and can be moved from one system to another, from one facility to another. In addition, most virtualization technology has disaster recovery capabilities built in to automate and prioritize the system recovery process.

This could free up IT from performing a few steps in disaster recovery, but many of the procedures needed to back up and maintain systems remain.

"With server virtualization, we gain high availability at a lower cost, but we still have to patch, monitor and troubleshoot -- that doesn't go away," Picardal said.

In addition, if you do choose to deploy virtual desktops, don't think it will be as easy as your server virtualization project. "With server virtualization, you worry about CPU cycles, memory, disk, network connectivity -- the same things you did before," Christus Health's Bruni said. "In the client [virtualization] space, you have to worry about screen shots, latency on circuits and whether that causes flash video not to perform appropriately. There are a lot of things that run on a desktop that never used to run in a data center [that now do]."

The tradeoff? Peace of mind, Bruni said. "The core benefit [of virtualization] back to the business is knowing that they have multiple ways of accessing data, services or applications … [because] the core infrastructure is designed to ensure that core services remain available."

First Blood: Japan's Quake to Hit Insurance Market

by James Eck and Kenji Kawada, Moody's Investors Service, 16 March 2011

On 11 March, a massive 9.0 magnitude earthquake struck off the coast of northeast Japan, approximately 130 km east of Sendai and 373 km northeast of Tokyo. The earthquake, the largest ever recorded in Japan, triggered a powerful tsunami with waves up to 10 meters high in the hardest-hit Miyagi prefecture, resulting in more than 10,000 dead and missing persons and causing vast destruction to property and infrastructure.

Preliminary estimates of insured losses are US$15 billion to US$35 billion, and could rise as losses from the tsunami, which are not yet fully incorporated in the range, are refined. These losses will fall to insurance and reinsurance companies in Japan and around the world, resulting in negative credit implications for both sectors.
The ultimate amount of insured losses from this event, as well as the market participants that will bear them, will depend on the types of coverage provided (residential earthquake risks are covered by a government reinsurance program, while commercial risks are not), the amount of reinsurance purchased, and the structure of reinsurance programs.
An additional wildcard is the potential for business-interruption losses, which are influenced by damage to power and transportation infrastructure. We believe that estimating claims will be a protracted process, as the size and scope of the event will place significant strain on insurers' claims adjustment resources. Moreover, aftershocks could last for weeks, causing additional insured losses.
We expect the following insurance market participants to be most affected: Japanese domestic insurers, Japan Earthquake Reinsurance Co. Ltd., international insurers, global reinsurers/Lloyd's market, retrocessionaires and catastrophe bonds.
Japanese Domestic Insurers
The Japanese non-life insurance market is highly concentrated, with three groups accounting for around 85% of the Japanese market based on premium income from fire (including earthquake) insurance:
  • MS&AD Insurance Group (insurance financial strength rating (IFSR) Aa3 stable for Mitsui Sumitomo Insurance and A1 stable for Aioi Nissay Dowa Insurance)
  • Tokio Marine Group (IFSR Aa2 stable for Tokio Marine & Nichido Fire Insurance)
  • NKSJ Group (IFSR Aa3 stable for Sompo Japan Insurance)
Earthquake coverage is not included in standard fire policies in Japan and must be purchased separately for most residential property exposures. Earthquake insurance covers damage from earthquake, fire following earthquake, volcanic eruptions, and tsunamis.
Residential earthquake risk is ceded by insurers to Japan Earthquake Reinsurance Co., Ltd. (JER). However, Japanese domestic insurers take back a portion of losses incurred by JER above ¥115 billion (US$1.4 billion), with maximum exposure of ¥593 billion (US$7.2 billion).
In addition, kyosai, or cooperative insurers, such as Zenkyoren (not rated), also provide coverage to homeowners (where earthquake risk is included in fire policies), but are not required to participate in the government backstop. To manage their earthquake exposures, kyosai are large reinsurance buyers in the private market.
We expect commercial and industrial losses to fall heavily on domestic companies (at least on a gross basis), as they account for approximately 94% of the non-life insurance market in Japan. Foreign insurers earn just 6% of premiums. However, a meaningful portion of commercial losses will likely be passed on to the global reinsurers.
Japan Earthquake Reinsurance Co., Ltd.
This entity, jointly owned by private P&C insurance companies, assumes residential earthquake exposure from domestic insurers and provides up to ¥5.5 trillion (US$66.9 billion) in claims-paying capacity.
Losses above ¥115 billion (US$1.4 billion) are shared with domestic insurers and the Japanese government at various levels of co-participation as loss levels increase beyond the JER's first-layer retention. JER's maximum exposure is approximately ¥606 billion ($7.4 billion).
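The layered loss sharing works like excess-of-loss reinsurance layers. In the sketch below, only the ¥115 billion first-layer retention and the ¥5.5 trillion total capacity come from the figures above; the per-layer co-participation shares and the second-layer ceiling are invented for illustration and do not reproduce JER's actual ¥606 billion maximum:

```python
# Hypothetical layer table: (ceiling in ¥bn, private share, government share).
LAYERS = [
    (115, 1.00, 0.00),    # first layer: retained in full by JER/insurers
    (1925, 0.50, 0.50),   # second layer: split with the government (invented)
    (5500, 0.05, 0.95),   # top layer: government bears most (invented)
]

def private_share(loss_bn):
    """Portion of an industry loss (in ¥bn) borne by JER and private insurers."""
    paid, floor = 0.0, 0.0
    for ceiling, private, _gov in LAYERS:
        in_layer = max(0.0, min(loss_bn, ceiling) - floor)
        paid += in_layer * private
        floor = ceiling
    return paid

assert private_share(100) == 100.0             # below the retention: all private
assert private_share(1000) == 115 + 885 * 0.5  # part of layer two is shared
```

The mechanism shows why the private sector's exposure grows much more slowly than total losses once the government co-participation kicks in.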
International Insurers
The largest global insurers, including ACE Limited (IFSR Aa3 stable), Chartis (IFSR A1 stable), Allianz (IFSR Aa3 stable) and Zurich (IFSR Aa3 stable) write commercial lines business in Japan, but as mentioned above, have only a small market share. These companies also utilize reinsurance to manage exposures.
Global Reinsurers/Lloyd's Market
We believe a significant amount of losses will flow to the global reinsurance industry (including various Lloyd's syndicates), as catastrophe reinsurance covering Japanese earthquakes is a large market (for both commercial risks and residential risks from kyosai).
We expect the largest global reinsurance players to report the highest losses on a nominal basis. These players include:
  • Munich Re (IFSR Aa3 stable)
  • Swiss Re (IFSR A1 stable)
  • SCOR (IFSR A2 positive)
  • Hannover Re (not rated)
  • Berkshire Hathaway (IFSR Aa1 stable)
  • PartnerRe (IFSR Aa3 stable)
  • Everest Re (IFSR Aa3 stable)
For losses as a percentage of equity capital, we expect significant differentiation among companies as certain firms are not heavily exposed to Japanese risks.
As the supply of retrocession coverage (where one reinsurer buys coverage from another reinsurer) has increased over the past couple of years, prices have come down, encouraging some reinsurers to buy more protection from retrocessionaires. This coverage is often bought on a collateralized basis from privately held reinsurers and hedge funds.
Given the magnitude of Japan's catastrophe, it is possible that attachment points for such coverage could be breached, resulting in losses being passed to the retrocession market.
Catastrophe Bonds
Ten catastrophe bonds outstanding with approximately US$1.7 billion in par value include Japanese earthquake risk as a covered peril. Most of these bonds were sponsored by global reinsurance companies to hedge their risks, including Munich Re, Swiss Re, SCOR, Flagstone Re (IFSR A3 negative), and Platinum Underwriters (not rated).
These bonds are typically exposed to losses beyond a certain triggering threshold, which can include actual insured losses sustained from an event or a parametric trigger, where the reinsurance payout is based on the location, magnitude, and depth of an earthquake.
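A parametric trigger can be modeled as a pure function of the event's measured parameters rather than of claimed losses. The thresholds, covered region and par value below are invented for illustration; real triggers are often graduated rather than all-or-nothing:

```python
# Hypothetical binary parametric trigger: the payout depends only on the
# earthquake's measured magnitude, epicenter location and depth.

def parametric_payout(magnitude, lat, lon, depth_km,
                      par_value=100_000_000):
    """Return the principal paid to the sponsor if the trigger fires, else 0."""
    in_region = 34.0 <= lat <= 41.0 and 138.0 <= lon <= 145.0
    if in_region and magnitude >= 7.5 and depth_km <= 60:
        return par_value
    return 0

# A large, shallow quake inside the covered box triggers the full payout;
# the same quake outside the box does not.
assert parametric_payout(9.0, 38.3, 142.4, 30) == 100_000_000
assert parametric_payout(9.0, 50.0, 142.4, 30) == 0
```

Parametric structures pay quickly because no loss adjustment is needed, but they leave the sponsor with basis risk: actual losses may differ from what the trigger implies.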
Impact on Reinsurance Pricing
While the area most heavily affected by the earthquake and tsunami is not densely populated relative to other areas in Japan, this event underscores the immense risks earthquakes pose to life and property in densely populated and industrialized regions. If the epicenter of this earthquake had been closer to Tokyo, the death toll and economic and insured losses would have been many multiples higher.
This event could stabilize reinsurance pricing in the months ahead as reinsurers' capital (and capacity) has been reduced by a string of catastrophic events during the past six months, including two earthquakes in New Zealand, floods and Cyclone Yasi in Australia, and now, the largest Japanese earthquake since record keeping began.
We anticipate strong reinsurance market conditions in Japan for 1 April renewals (although some treaties may have already priced), as well as in Australia and New Zealand, where renewals occur at mid-year. Upcoming June renewals for Florida hurricane reinsurance could also experience firmer conditions.
We also expect share repurchases among global reinsurers, which were heavy during 2010, to grind to a virtual halt as firms preserve capital for the upcoming Atlantic hurricane season.
About the Authors
James Eck is Vice President – Senior Credit Officer and Kenji Kawada is Vice President – Senior Analyst at Moody's Investors Service. This article first appeared in Moody's Weekly Credit Outlook released on 14 March 2011.

China Set to Take India's IT and Outsourcing Services Crown

by Enterprise Innovation Editors, 25 March 2011

The playing field for global delivery is seeing increased competition from the tiger (India) and dragon (China) cities as both countries are keen to create a long-term sustainable competitive advantage. However, IDC predicts that by 2014, Chinese cities will emerge as the most attractive global delivery locations.


IDC's latest Global Delivery Index showed that the government of China continues to build heavily on its foundation of the 1000-100-10 initiative rolled out in 2006, with 22 cities now identified as software and outsourcing locations.


As the battle for supply of global delivery heats up, the investments made by the government in China to promote "Triple Play" --a convergence of fixed line, cable and broadband services and cloud technologies, particularly in Shanghai and Beijing-- will tip the global delivery scales in favor of the Chinese cities by 2014, IDC said.


Meanwhile, India, with an abundance of engineering graduates and experienced professionals, will continue to dominate on the talent front. Local governments in India are investing heavily in technology schools to churn out graduates with the right skill sets to meet market needs.


"Infrastructure investments, particularly in terms of key technologies, will drive foreign direct investment, which in turn will bring with it foreign talent. If this talent is capitalized upon and can be used to train locals, there will be experienced talent available in China within the next five to seven years, placing the dragon in a dominant position on the global delivery map," says Suchitra Narayan, Research Manager, Services, IDC Asia-Pacific.


"India cannot rest on its laurels and rely on its existing differentiators of 'low cost' and 'availability of talent'. It is currently in a position to capitalize on the best practices and years of experience to build out/automate future technologies and solutions that may stand it in good stead for the future. Strategic growth and investments are key for future dominance," Suchitra adds.


Top BPO Destination


Given China's bullish push into outsourcing, it could also overtake India as the world's top BPO destination of choice, but there is no room for complacency.


In a new report, the independent technology analyst Ovum claims that even though the industry itself continues to grow, India's share of the market is in decline and China is already providing substantial competition as the world's outsourcing powerhouse in terms of footprint, awareness and capability.

"Both countries are often touted as the low cost delivery location of choice, with many non-domestic vendors investing millions of dollars to set up operations in multiple locations in both countries," says Jens Butler, Principal Analyst. "Add to this China and India's growing influence as global centers of political and economic power, and it appears to be a two horse race to the finish."


However, according to Ovum, each market must be considered as a delivery location on an individual contractual demand basis rather than with a broad brushstroke, as each has its own social, political and economic competitive and comparative advantages.


Stability and governmental influence will play a key role in future, with China's five-year plans and associated federal infrastructure investment, and India's province-led support for its home-grown free-market organizations. The focus on such investment and direction will need to continue, as there is a host of up-and-coming alternatives to these current "magnets" of the services delivery world, such as the Philippines, South Africa and Latin America. "These locations may not compete from a pure scale perspective, but they may well continue to extract market share from both, just as China continues to take share from India," concludes Sydney-based Butler.

Is It Time to Leave China for Vietnam and Other LCCs?


by Cesar Bacani, 22 March 2011

When Premier Wen Jiabao declared at the National People's Congress on March 5 that China will continue boosting domestic consumption to rebalance the economy, local and foreign companies knew exactly what would come next.

Sure enough, the prime minister went on to say: "We will steadily increase the minimum wage of workers, basic pensions of enterprise retirees, and subsistence allowances for both urban and rural residents." This approach is meant not only to put more money in pocketbooks and thus encourage consumption, but also to help bridge the rich-and-poor divide and ensure social stability.
But the rise in minimum wages will, of course, add to the cost of doing business. MNCs and other enterprises in China are already getting hit. On January 1 this year, the Beijing city government increased the statutory monthly wage in the capital to RMB1,160 (US$176), up 21% from RMB960, which was mandated only in June last year and represented a 20% rise.
All the country's provinces and municipalities have announced plans to raise their minimum wage in 2011. Xu Xianping, who is deputy director of the National Development and Reform Commission, reports that five regional governments have promised to raise income growth above GDP growth while 19 others plan to raise wages at the same pace during the 12th Five-Year Plan (2011-2015).
How High Can Wages Go?
It's all unsettling news to the legions of global companies that use China as a production base. Rising minimum wages will surely raise their labour costs. "We're going to see continued increases in wages for privately owned companies . . . at around 17% annual growth in the next three years," says Jonathan Wright, Senior Executive at Accenture, the global management consultancy.
This, even as wages in Vietnam, India, Indonesia and Asia's other low-cost countries (LCCs) remain stagnant or rise more slowly. According to Accenture, the Chinese average hourly rate in U.S. dollar terms is now far higher than in other LCCs (see chart). The projected increases in the minimum wage this year, coupled with the strength of the renminbi against the greenback, are expected to bring the average hourly wage in China past the US$2.50 level – more than twice wages in Vietnam, whose currency has just been devalued against the U.S. dollar.
Average hourly wage, China vs. Vietnam, Indonesia and India (US$ per hour)

But Wright argues that MNCs in China don't have to pull up stakes and go to Vietnam and other LCCs instead, although they can consider a 'China-plus' strategy in which some production can be done in another country.
"These increases can be managed," he says, pointing to the findings of Wage Increases in China: Should Multinationals Rethink their Manufacturing and Sourcing Strategies?, a new Accenture study that analysed wage trends in three industries – apparel (footwear), heavy equipment, and high technology (personal computers) – and plotted how future salary increases will affect profit margins.

Sensitivity Analysis
As expected, Accenture tracked a steady increase in average hourly wages in the three Chinese industries over the past six years (see chart). The increase has been steepest in high technology and heavy equipment, particularly in 2009 and 2010. Wage increases in apparel, which typically requires less sophisticated skills, have been more moderate.
Average hourly wage in China by industry (in US$ per hour)

Are higher labour costs reason enough for MNCs to leave China? "There's no need for urgent and drastic change," says Wright, at least if the motivation for leaving China is simply the rising salaries.
After conducting a sensitivity analysis on labour costs and profit levels in China's apparel, heavy equipment and high technology sectors, Accenture concluded that MNCs that source or manufacture 40% (the industry average is 37%) of their apparel in China will need to raise retail prices by just 0.7% to maintain profit margins, even if wages rise 30% on average from current levels.
In the heavy machinery sector, companies that source or manufacture 60% (the industry average) of their products in China will need to raise retail prices by 1.5%. However, companies sourcing or manufacturing most of their personal computers in China (industry average: 90%-100%) will need to increase retail prices by a much more significant 4.8%.
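As a rough illustration only (not Accenture's actual model), the first-order arithmetic behind this kind of sensitivity analysis can be sketched as follows. The labour-share and sourcing-share inputs are taken from the article; the simplified formula will not reproduce Accenture's published percentages exactly, since their model also accounts for factors not shown here:

```python
def required_price_increase(labour_share, china_share, wage_rise):
    """First-order estimate of the retail price increase needed to keep
    margins flat when wages rise in the China-sourced portion of output.

    labour_share: labour as a fraction of product cost (e.g. 0.03)
    china_share:  fraction of product sourced/manufactured in China
    wage_rise:    fractional wage increase (e.g. 0.30 for 30%)
    """
    return labour_share * china_share * wage_rise

# Apparel example: labour ~3% of cost, 40% sourced in China, 30% wage rise
apparel = required_price_increase(0.03, 0.40, 0.30)
print(f"{apparel:.2%}")  # 0.36% under this crude first-order model
```

The gap between this crude 0.36% figure and Accenture's 0.7% shows why the fuller study matters, but the shape of the conclusion is the same: when labour is a small slice of cost, even a 30% wage rise moves retail prices by well under a percentage point.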


What CFOs Can Do
But labour is just one driver. "Clearly it's not all about the labour cost," says Wright, who is one of the report's co-authors. "One of the reasons why we wanted to do this study is that executives around the world read about the wage increase and it created nervousness. We want them to realise that they need to manage it but they don't need to panic."

Wright identifies two other important drivers that will have an impact on retail prices of Chinese-made products: the cost of materials and currency exchange movements. The Accenture study, in fact, found that material cost accounts for a larger percentage of expenses (19% in the footwear industry, 21% in heavy machinery, 50% in personal computers) than does labour (3%, 4% and 20%, respectively).
"The difference is that you can hedge [material costs and currency exchange]," says Wright. "With labour costs, there's not an awful lot you can do." Still, hedging can take you only so far, and it can be very expensive. "And you can't hedge too far out," says Wright.
What can companies do? A key recommendation is to focus on operational excellence, with the aim of enhancing productivity and tightening cost management. This will help the company absorb higher labour and material costs without having to raise retail prices too much (or, if it has the pricing power, to raise prices and realise higher profits).
"There are significant opportunities in China as well around the world for cost takeout and improved efficiency in manufacturing and supply chain," says Wright. "Fifty percent of supply chain cost sits in your suppliers' own supply chain, and therefore the collaboration, the ways of working between vendor and customer, become significant."
Accenture research shows that there are significant overlaps in customer and vendor inventories, original design manufacturing and other operations. "The more collaboration, the more improved ways of working," says Wright. "In the relationships they have, in the flow of information . . . there's a great opportunity to absolutely counteract the increasing labour costs [in China]."
The idea of supply-chain collaboration has been around for years, but Wright says it may be reaching critical mass. "Technology is now becoming appropriate," he says. "I think the mind-set of executives is becoming appropriate, too. The technology and this mind-set to drive collaboration mean that we are potentially at the beginning of new ways of working."  
The traditional supply chain of "buy-make-move-sell is being replaced by ecosystems of partnerships, of collaborative joint ventures, of long-term relationships," says Wright. Companies are driving a red line through their organisation to delineate what their core and non-core competencies are. Their partners, which can include outsourcing and cloud services providers, can then take over those non-core functions.  
Should MNCs pull up stakes in China and move to Vietnam and other LCCs? "You've got to look at landed cost or total cost to serve," warns Wright. The cost of labour may be lower in other LCCs, but what about their infrastructure? Can you get cotton and other raw materials from the port to the factory and the finished goods from the factory to the port in a cost-effective and timely way?
Wright argues that companies should consider China's relatively strong infrastructure compared with that in ASEAN before making a decision. "These factors should come into play as they look at materials cost, as well as the manufacturing, sourcing and supply chain infrastructures," he says.

A China-plus strategy can be devised in which the company expands to western and central China and also to other LCCs like the ASEAN countries. Beijing launched a 'Go West' campaign in 2000 that grants tax breaks and other perks to enterprises that move into those areas, where workers are willing to accept lower wages than in the eastern coastal cities since they don't have to leave home.


That's a short-term fix, though, which is why Wright also suggests finding sourcing suppliers in Vietnam and other LCCs to lessen the impact of rising labour and other costs in China. But MNCs adopting this China-plus or integrated regional supply chain strategy must be ready to deal with cross-border trade movements, product quality issues, political and economic instability and institutional voids in those LCCs.
The beauty of the China-plus approach, though, is the potential for the MNC to sell into the vibrant Chinese and LCC domestic markets, in addition to exporting its products to the West and other developed countries. "Now what is required is for organizations to understand how to leverage on increasing local demand," says Wright.
In other words, companies can turn China's labour costs to their advantage by selling to those newly empowered consumers themselves. In effect, they'll be getting back (and more) what they pay out in higher salaries in the form of increased revenues.
About the Author
Cesar Bacani is Editor-in-Chief of CFO Innovation.

When Your Company Wants to Buy Apple iPads


by Angie Mak, 24 March 2011

A personal iPad or tablet of your choice — for everyone. That was what Financial Times Group CEO John Ridding told 1,800 employees last November. The company decided to hand out a US$480 bonus to "encourage all our staff to be expert and experienced in using [tablets]."

It is perhaps understandable for a media company to encourage the use of the newfangled devices. The FT has developed an iPad application for its flagship newspaper and made £1 million from ad sales off that app in the six months from its launch in May last year.
Then there is SAP, the German business enterprise software maker. "We have taken the view that it's the absolutely right thing to roll out a thousand iPads across the organisation, so that a customer-facing person can take this iPad and show some customer applications," says Colin Sampson, Senior VP and CFO for Asia Pacific & Japan.
"It's a very powerful message when you get in front of a customer with a very light, easy-to-carry device and say, 'Hey, look at what you can do for an executive on the run.'"
Factors to Consider
The Financial Times and SAP are among the early adopters. Many more companies are likely considering investing in tablets for their customer-facing staff and other employees – and their CFOs will, of course, be part of the decision.
Should you say yes? There are several factors to consider. Tablets are not cheap. The Financial Times is estimated to have spent US$864,000 on the subsidies. There are also issues around security, running costs, applications and features.
Don't forget the growing number of brands and models, which have different sizes, weights and technical specifications.
Then there is the question of whether the company needs tablet devices at all. While smaller and lighter than mobile computers, high-end tablets can cost more than a mid-level business laptop, yet have less computing power and fewer features. The iPad has no USB port or replaceable battery, for example.
Note, too, that a tablet is not a replacement for a computer. Indeed, you will need a desktop, laptop or netbook to download, upload and update files and applications. Nor can a tablet entirely replace a mobile phone, although you can use one to call other people through Skype. The two exceptions are the Samsung Galaxy and the Dell Streak, which function both as a tablet and a rather oversized phone, and do not need to be plugged into a computer to download or upload files and apps.
Still interested? We put together some of the important considerations for choosing which tablet may be the most appropriate and useful for your organization.

How Much Is It?

Tablets come in a variety of prices, depending on storage capacity (from 16GB to 64GB for the iPad), connectivity (wifi only or wifi and 3G) and other features. Below is a rundown of the models available or due to be launched in Asia:
  • Apple iPad 1 and iPad 2: US$498 to US$830
  • BlackBerry PlayBook: US$499 to US$700
  • Dell Streak: US$550 to US$705 
  • Motorola Xoom: US$799
  • Samsung Galaxy Tab:  US$600
For business use, we estimate that a company will need to pony up an additional US$900 or more per user for peripherals and services, including:
  • 3G access, 36 months (US$15/month): US$553
  • Expandable memory, MicroSD card, 32GB: US$75
  • Virtual Private Network (VPN) license, per user/year: US$50
  • Basic productivity app licenses, per user/year: US$50
  • HDMI connector (iPad): US$39
  • Security tracking service, 3-year plan: US$139

(e.g. CompuTrace LoJack for Laptops)
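The per-user figure quoted above can be checked with simple arithmetic. Summing the listed peripherals and services (item names shortened here for illustration) gives roughly US$906, consistent with the "US$900 or more" estimate:

```python
# Per-user peripheral and service costs from the list above (US$)
extras = {
    "3G access, 36 months": 553,
    "32GB MicroSD card": 75,
    "VPN license, per year": 50,
    "Productivity app licenses, per year": 50,
    "HDMI connector (iPad)": 39,
    "Security tracking, 3-year plan": 139,
}

total = sum(extras.values())
print(total)  # 906
```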

Large and Clear
Since a big part of the tablet's appeal is its ability to show still and moving images and to zoom in on tiny details, screen size and resolution should be key considerations when choosing a tablet for companies in media, fashion, architecture and design, hospitality and hotels, luxury cars and fine wine, among other industries.
The iPad has a 9.7-inch screen, while the Samsung Galaxy has a 7-inch screen (the same size as the BlackBerry Playbook, which is due to be released in the U.S. and Canada on April 19). The Dell Streak is the smallest at 5 inches. The Motorola Xoom boasts the biggest screen at 10.1 inches. Samsung plans to launch a 10.1-inch version on June 6 and an 8.9-inch model "in early summer."
In terms of resolution, the iPad's is comparable to high-definition TV video. Its screen shows images at 1024x768 pixels (132 dpi), which is finer than that of the BlackBerry PlayBook.
The Galaxy Tab, which is smaller in size, has a lower resolution than either the iPad or the PlayBook, at 1024x600. But the 8.9-inch and 10-inch screen models will have a resolution of 1280x800 pixels (160 dpi). The Dell Streak, again, lags behind with a resolution of just 800x480 pixels.
The champion in size and resolution is the Motorola Xoom, which boasts a superior resolution of 1280x800 pixels and a 10.1-inch screen. The Xoom is not yet available in Asia, however.
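The pixel-density figures quoted in this section follow directly from resolution and screen diagonal. A small sketch using the standard pixels-per-inch formula (not vendor data) reproduces the iPad's 132 dpi:

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal pixel count divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1024, 768, 9.7)))    # 132 -- iPad
print(round(ppi(1280, 800, 10.1)))   # 149 -- Motorola Xoom
```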

While slightly smaller and with a lower resolution than the Xoom, the iPad is the champ at scrolling and interface animation, which are important in sales and product presentations. Still and moving images of products and services appear sleeker and more impressive on the iPad as a result. In comparison, the Galaxy Tab tends to stutter when scrolling and animating.

The downside for the iPad (both version 1 and 2) is that it doesn't display Adobe Flash or Microsoft Silverlight animations, which designers often use. It also doesn't support expandable memory, nor does it have a USB port, precluding use of USB sticks to download and upload files.
This makes moving files to and from the iPad rather annoying, as all transfers have to be performed through synchronisation with Apple's proprietary iTunes software on a computer. On the Galaxy, you can drag and drop files between a USB stick and the tablet; you can also play Flash animations.
Tablet in the Pocket
Screen size is directly correlated with portability, which is a key consideration for certain companies. Doctors are said to be partial to the Dell Streak, for example, because its smaller size and weight (227 grams) make it compact enough to fit into a lab coat pocket.
The two Apple tablets are heavy blocks by comparison. The iPad 1 weighs in at 680 grams, while the iPad 2 is a bit lighter at 600 grams, about the same as the Galaxy Tab. On the other hand, those with big hands will find it easier to send email on the bigger tablets than on the Streak.
Still, the Streak's portability is a plus in the medical field (and in mining, fisheries and other industries requiring travel to remote places). Leveraging its foothold in providing IT infrastructure to hospitals, Dell says it has designed the Streak to allow configurations that give the user quick access to a hospital's electronic medical records and lab results.
The tablet can also be configured to hold passwords and security keys. The encrypted data is stored in the cloud, housed in a data centre operated by the hospital or by Dell, which minimizes the security risks in the event the Streak is lost or stolen.
Running on Empty
One downside for the Streak: the battery charge could be exhausted in half a day of use, including talk time. It's a shortcoming the Streak shares with the Galaxy. But both devices have an external power source, so users can buy extra batteries for continued usage.
Tablets powered by internal batteries such as the iPad can be used for longer, but the battery cannot be replaced, only recharged.
Most tablets have a considerably longer battery life than laptops, making them ideal for use onboard a plane or when working away from the office for a full day. The Xoom and the iPad boast the longest runtime of the currently available tablets, at around ten hours.
The soon-to-be-released PlayBook is rumored to have 17 hours of juice for surfing the Internet when fully charged.

Multitude of Tasks

Battery life, operating system, processor, random access memory (RAM) and storage size are all important for running word-processing, spreadsheet and presentation applications, not to mention enterprise software suites for customer relationship management and financial reporting.
But because the iPad 1 comes with only 256 MB of RAM, the factory-installed operating system could not support multi-tasking: users needed to exit an application before they could open another. Users who download an upgraded OS, however, get limited multi-tasking. The iPad 2 allows limited multi-tasking too, with a bigger 512 MB of RAM (the same as the 7-inch Galaxy) and an upgraded operating system.
It is the Motorola Xoom that boasts true multi-tasking. It has 1 GB of RAM, 32 GB of internal memory (with the option to add 32 GB more via an SD card slot) and a USB port, and it runs Android 3.0 (Honeycomb) with a 1GHz processor. The yet-to-be-released PlayBook and 10.1-inch Galaxy will have 1 GB of RAM and other similar features too.
Unlike the iPad, the Xoom, Galaxy and most other tablets support Flash applications. This means that they can run business applications like SAP BusinessObjects OnDemand, which uses a mixture of Flash and HTML.
However, for the sheer number of apps available, the iPad still rules. As of March this year, there are at least 350,000 apps officially available on Apple's App Store, both paid and free. The Google-administered Android Market, which can be accessed by users of smart phones and tablets on the Android platform, has some 250,000 apps. Additional apps can be downloaded from the just-launched Amazon Appstore for Android.
Unfortunately, not all of the apps in the Android markets can be used universally, according to some users, who say that apps meant for a particular tablet cannot be used by another Android device. Usage is further limited by which country the user is in, they add.
Would-be users of the BlackBerry PlayBook will have even more limited options – the BlackBerry App World only has some 15,000 apps. Even then, the PlayBook will not be able to download all of them because it will run on a new operating system. Research in Motion (RIM), the tablet's maker, says the PlayBook will be able to download 4,000 of those 15,000 apps when the device is launched.
Viruses and Detonators
Security, of course, is a key concern for all companies, although perhaps in varying degrees.
The Android Market came under fire recently when over 50 malicious applications were uploaded to it. According to the Wall Street Journal, "Unlike Apple Inc. or BlackBerry maker Research In Motion Ltd., Google doesn't have employees dedicated to approving apps submitted to its [Android Market] store."
Google says that apps that violate its policies are removed, but while it has some internal measures to identify offenders, it "largely relies on users to rate apps and raise the alarm about any problems with them," reports the Journal.

Because it doesn't have external ports, the iPad is protected from viruses that may be uploaded via USB flash drives and SD cards. Users of the other tablets will have to be vigilant about the provenance and content of the thumb drives and cards they plug into their devices, including would-be buyers of the PlayBook, which will have a microUSB port.

All of the tablet models allow users to set a password, although some may not activate this feature because of the inconvenience. Most devices also allow the user to remotely 'detonate' their contents in case of loss.
The iPad, for example, has a Remote Wipe feature. So long as the owner has a MobileMe account, the tablet can be instructed to erase the information it contains and to reset the device to its original factory settings. If the iPad is later found, the same MobileMe account can restore email, contacts, calendars, and bookmarks.
The BlackBerry PlayBook touts more security features. It will work in tandem with a proprietary BlackBerry Enterprise Server (BES) and Network Operations Center to make sure that data sent to and from the tablet will be encrypted in transit and cannot be decrypted outside of the corporate firewall.
RIM claims that the tablet's inbuilt Bluetooth device will allow the tablet to connect with a variety of devices in a highly secure manner. In addition, files handled or viewed on the device are only temporarily cached on it, and are automatically cleared once the tablet is closed.
Which Tablet?
So which device is most appropriate for your company? It will depend on how you envision the tablet being used. The iPad, for example, is the current tablet of choice among fashion and other lifestyle enterprises, which are apparently attracted by its sexy design, fine resolution and top-notch scrolling and animation.
Thus, fashion icon Burberry gave iPads to its in-store customers, allowing them to immediately buy the label's new collection of clothes during a private viewing at London's Fashion Week. Burberry stores in 16 countries will now offer customers the use of an iPad to browse for clothes.
That's not to say that the iPad is the only option for customer-facing devices. When the Xoom and the 10.1-inch Galaxy are finally available in Asia, they can certainly be considered for sales, marketing and presentation purposes.
If portability and communications are the priorities, then the Dell Streak and Samsung Galaxy phones-cum-tablets are probably the best bets. As an aside, note that only the Streak supports 4G (broadband-like speeds delivered over a cellular network) at this time, although most tablets promise 4G capability in the future.
For companies that wish to give executives access to business applications software on the road while making sure security is not compromised, the PlayBook may be the more robust option.

No, Thanks

What if the decision is not to equip staff with tablets at all? Some companies may decide that presenting the actual products, rather than a digitised version, is the way to go. Others may not want the hassle of maintaining yet another slew of IT devices, in addition to servers, desktops, laptops, smart phones and other existing devices.
That will still not get companies off the hook. Technology research provider Gartner advises executives to seriously evaluate the capabilities of tablets within their organisation, as the new devices can potentially be disruptive to the business models and markets of many enterprises.
Since individuals are willing to buy iPads and other tablets and use them in the work environment, regardless of their company's existing IT infrastructure, enterprises should be ready to support them, Gartner argues.
"Organizations need to recognise that there are soft benefits in a device of this type in the quest to improve recruitment and retention," adds Stephen Prentice, Gartner Fellow and Vice President.
"Even if you think it is just a passing fad," he asserts, "the cost of early action is low, while the price of delay may well be extremely high."
About the Author
Angie Mak is Online Editor at CFO Innovation.