Head in the Clouds

ERP analyst Derek Singleton posted a paean to the SaaS variant of cloud computing. With his head firmly in the clouds, looking for rainbows across five points, Derek does trip over one salient feature of modern computing: the user experience should be the focus of vendor efforts (everyone has feature/function parity these days). For Derek, here is some ground-level perspective on his key points.

First, he notes that cloud companies “have momentum and attract great talent.”

SaaS companies have an intangible that’s working to their benefit – they’re recruiting exceptionally bright, young talent.

The same could be said of the “Dot-Coms” of a decade ago; images of a bubble economy are hard to avoid here.

His second point is the wonder of multi-tenant architecture and “smoother scalability.” To the usual claim of overnight upgrades across an entire customer base, Derek adds a new one:

A further advantage of multi-tenancy is that company-specific customizations are left intact when the system is updated. This is because customizations are made in metadata. For the non-techies out there, metadata is data that defines the settings and customizations for each customer, but is maintained separately from the core application code.

Multi-tenant architecture is a mixed bag for ERP installations – one-swoop upgrades can cause problems for any firm covered by Sarbanes-Oxley (whether directly or through bank loan terms). The metadata approach is indeed an appropriate development technique, but it preceded multi-tenant architecture and SaaS by at least a decade. Traditional ERP vendors have employed metadata approaches for quite a while. For example, HarrisData restructured its applications around a metadata base starting in 2000, while HarrisData’s RTI Software division began using metadata in 1992.
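The technique is easy to sketch: per-customer customizations live in a metadata layer the upgrade process never touches. A minimal illustration in Python (the setting names and structure here are hypothetical, not any vendor’s actual format):

```python
# Core application defaults shipped by the vendor; replaced wholesale on upgrade.
CORE_DEFAULTS_V1 = {"invoice_terms": "NET30", "currency": "USD", "approval_levels": 1}
CORE_DEFAULTS_V2 = {"invoice_terms": "NET30", "currency": "USD", "approval_levels": 1,
                    "tax_engine": "standard"}  # new feature added by the upgrade

# Customer-specific metadata, stored separately from the application code.
customer_metadata = {"invoice_terms": "NET45", "approval_levels": 3}

def effective_settings(core_defaults, metadata):
    """Layer the customer's metadata over the vendor's core defaults."""
    settings = dict(core_defaults)
    settings.update(metadata)
    return settings

# Before the upgrade, the customizations apply:
before = effective_settings(CORE_DEFAULTS_V1, customer_metadata)
# After the upgrade the customizations survive untouched, and the customer
# picks up the new core feature automatically:
after = effective_settings(CORE_DEFAULTS_V2, customer_metadata)
```

The point is that nothing about this layering requires multi-tenancy; any vendor that separates customization data from application code gets the same upgrade-safety benefit.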

The third point, that the “cloud is changing enterprise software consumption,” is a recycling of a three- (four-?) decade-old argument over how to license software for enterprise use. Many pricing models exist, from the enterprise-wide license, to the CPU license, to various user-based license schemes (including SaaS licensing). For each pricing model there is an equivalent array of payment options, from upfront cash to lease terms. The arguments and distinctions between the pricing/payment options amount to deciding whose ox is being gored at any point in time.

The fourth point on the importance of user experience is a very strong point, “great user experience equals happy customers.”

Historically, enterprise software hasn’t been overwhelmingly friendly when it comes to the user interface (UI) or user experience (UX). It’s why a standing army of consultants and professional services firms exist to help buyers customize their systems and learn how to use the software.

Too few traditional vendors take this point seriously, focusing instead on capturing the monopoly rents derived from providing the standing army of consultants and professional services people. However, as with metadata above, a focus on user experience is independent of cloud and SaaS deployment of applications. HarrisData emphasized the user experience beginning in the mid-90s, and has held training and services costs to less than 5% of revenues annually since 1996. Reducing the training and services required to deploy our applications is still number one on HarrisData’s to-do list.

The final point is that since cloud and SaaS is on the web, and young people “get the web,” the cloud is youthful and innovative and magically better than sliced bread. Apparently young people want

web demos, trial versions of the system and user ratings of the product.

This is hard to argue with, as young (and old) people wanted these things in the 80s and 90s as well. Perhaps all people would be better off focusing on the business value of various ERP options rather than the hype and fluff of analysts.

Posted in Cloud computing

PHP on IBM i – Adding new value to old applications fast

The IBM Systems Magazine PowerUp blog is talking about PHP on IBM i, a thread started by Laura Ubelhor with her post “There’s No Doubt PHP is an Awesome Fit on IBM i”.

For the average IBM i shop, PHP is a great way to extend existing applications to the web. Laura’s post highlights success in dramatically improving the efficiency of accounts payable – using a web application to deliver the details of the remittance advice when no paper check is used. Self-service applications delivered through the web offer significant savings in reduced paperwork, eliminated mail charges, and fewer inbound phone calls. PHP allows IBM i developers to quickly deliver self-service applications to new groups of users – such as vendors, customers, and suppliers.

One HarrisData manufacturing customer used vendor self-service applications over the web to share raw material requirements plans (from MRP) directly with suppliers. The results? Suppliers were able to better manage their own manufacturing capacity, improving efficiency and reducing costs at the supplier. Further, because the suppliers could see the forecast and any changes, they were able to substantially improve on-time delivery, giving manufacturing management the confidence to optimize processes and lower costs.

Success stories like these are really based on delivering web-based applications that provide external users significant value, and only require the external users to have a web browser available. Well-designed self-service applications are available anywhere, anytime, with no training. IBM i shops can quickly and successfully extend applications to the web using PHP. The skills are easy to learn, and focused applications can be developed and deployed quickly – with the entire application stack (including the web server) on your IBM i.

At HarrisData, we made the choice years ago to move our user interface to a web paradigm, to take advantage of how quickly and easily users took to browser-based solutions. We’ve developed comprehensive ERP, CRM, and HRIS applications using PHP to drive browser-based interfaces to our traditional RPG applications, allowing us to deliver high-impact solutions without having to completely rewrite the RPG business logic that has been tested and working for our customers for decades.

Posted in PHP

Is Enterprise Integration the Next Hurdle for the Cloud?

Stephanie Neil poses an interesting question: do SaaS and Cloud solutions create as much work in enterprise integration as they save in reduced hardware and storage management?

Enter software as a service (SaaS) applications, which might seem to be an IT manager’s dream: no server and storage systems to buy and maintain. But their emergence presents a whole new integration problem between on-premise legacy apps and those that live in the cloud.

According to the recent InformationWeek Analytics 2011 Enterprise Applications Survey, 43% of SaaS users are very happy with the ability to deploy the applications quickly, but are much less satisfied with the complexity of integrating hosted apps with on-premise systems and data sources.

To the extent that enterprise vendors (either cloud or on-premises) utilize Service Oriented Architectures and document required data structures this may seem like a small problem. However, all it takes is one key application which is not SOA and the enterprise integration problem rears its ugly head. At that point IT skills developed by integrating current applications are useful. Unfortunately in many SaaS and cloud implementations, the IT staff does not have access to the application source code necessary for successful integration efforts.

An appropriate vendor offering in today’s environment should take advantage of what the cloud offers (no server and storage systems to buy and maintain) while providing the source code access IT still requires for successful implementation. HarrisData offers ERP through a Platform as a Service structure to directly address this problem. In addition to managed hardware and source code access, PaaS allows the customer to control application upgrade timing – an important consideration in the era of Sarbanes-Oxley.

Posted in Cloud computing

Ultimate Justification for Cloud ERP

Chris Chappinelli identifies what may be the most important justification for moving your ERP to the cloud – cyber-security. Given the increasing complexity and frequency of attacks on customer and employee data, many IT departments are ill-prepared to face the challenges. As Chappinelli notes, even the experts get hacked (RSA Security was a recent victim).

Against the advanced persistent threats that exist in today’s networked world, even paragons of cyber-security can become victims. Wouldn’t you want to shift that responsibility to another company if you could?

Your management and your business insurer would both be pleased to offload such risk. The question is how prepared the cloud providers are to accept and manage that risk. It may take time for an acceptable security standard to emerge, and even longer for security to find its way into cloud license/service agreements. However, anyone contemplating cloud computing should start thinking about security today.

Posted in Cloud computing

Promise of the Cloud

What, exactly, is Cloud Computing?

Some degree of skepticism about cloud computing is understandable. The marketing hype surrounding it smacks of previous campaigns by consultants and technology companies to lure corporate investment. To wit: Y2K paranoia, dot.com hysteria and fiber optic mania.

The above quote is from an article by William J. Holstein wherein he focuses on practical business issues raised by the promise of Cloud Computing. His advice is to use the promise of cloud computing to dive in and understand the full costs and value of whatever information systems are in place in your organization today, then aggressively manage those resources to provide the greatest benefit at the least cost. When evaluating the Cloud, Holstein notes an important reality:

Defining “cloud” isn’t easy. The term does not refer to a pie-in-the-sky IT heaven open to all. The cloud is distinctly proprietary.

He follows with some cautionary advice:

Of course, CEOs must tread carefully in the cloud and not take wild plunges into unknown territory.

As with any business decision (information technology should be considered a business decision, not a technology decision), carefully evaluate and uncover any risks of the alternatives before committing. Holstein points out that in many cases a Cloud vendor may handle risks better than your in-house resources – but it is imperative that you confirm the risks are handled. He identifies security, privacy, and availability as key risks, so investigate a Cloud vendor’s approach to encryption, failover to a remote location, and backup before committing.

But smart CEOs are using it to strip out costs and give their businesses a competitive shot in the arm—and conducting tough analyses to make sure it delivers on its promise.

Excellent advice. Remember that everything Holstein discusses applies to selecting an on-premises solution as much as it applies to a Cloud solution. Dive in and understand the benefits and risks of each approach before making your decision.

Posted in Cloud computing

Reliability of Cloud vs On-Premises Software

Lauren Carlson at Software Advice makes the point that the cloud reliability problems we have seen lately should not be considered in isolation. That is a very helpful point. She holds up the service level agreement as guarantor of cloud reliability, along with a quoted observation that cloud vendors have the latest technology while on-premises deployments do not. Does her comparison hold water?

In her analysis Lauren relies on a 2008 survey of email uptime comparing Gmail to Lotus and Exchange, in which Gmail compares favorably. In that study Gmail users experienced less unscheduled downtime and no scheduled downtime, for about an hour more uptime per month. No explanation is provided, but if we assume the Gmail service makes better use of hot-site backup for scheduled and unscheduled maintenance than on-premises operations do, the results support the idea that cloud vendors use better technology than in-house data centers on average.
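To put “an hour more uptime per month” in perspective, downtime per month falls quickly with each extra “nine” of availability. A quick back-of-the-envelope calculation (the availability percentages below are illustrative, not figures from the survey):

```python
# Minutes in a 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

def downtime_minutes(availability_pct):
    """Expected downtime per 30-day month at a given availability percentage."""
    return MINUTES_PER_MONTH * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {downtime_minutes(pct):.0f} minutes of downtime per month")
```

Roughly: 99% availability means over seven hours of downtime a month, 99.9% about 43 minutes. An hour per month is the difference between a “two nines” and a “three nines” operation, so it is a real gap, but as the rest of this post argues, it is far from the whole reliability story.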

Uptime is not the full story of reliability. Lauren does note that the recent Google Blogger downtime event spanned 20 hours, but fails to note deeper problems beyond downtime in the Blogger event. Many Blogger users lost their blog content for several days, received poor support from Google during the outage, and permanently lost the blog comments — blogs are about conversations between the author and reader and the Blogger outage erased these. Data loss is a serious reliability problem, and one not captured by a focus on uptime/downtime.

More troubling was the cause of Google’s Blogger outage — it occurred because of a scheduled upgrade affecting all Blogger users. In effect Google’s use of the latest and greatest technology caused the downtime. Data Center Management 101 suggests testing all upgrades and holding a full backup in case of problems — Google failed to use good Data Center practices at the expense of every single Blogger user. The service level agreement was/is useless in such a circumstance.

Reliability has many factors. Uptime, data security / protection / restoration, customer service quality, new technology, and data center management practices are factors highlighted in one event. It may be that a service level agreement focuses attention on uptime over other factors, but that is no guarantee of reliability on the whole. How the user / customer and vendor work together to ensure reliability is what is important, not whether they choose to do so on-premises or in the cloud.

Posted in Cloud computing

The Slippery Slope of Software Services (Part 2)

Don’t buy software from services-driven vendors

In Part 1, we looked at how to keep services from exploding out of control and blasting your project budget. In part 2, we look at why a software vendor who depends too much on services is a bad choice.

Software vendors have viewed traditional revenue sources through three lenses:

  • Initial license fees

    The ‘up front’ costs of acquiring the right to use the software. These fees are frequently viewed as an offset to sales and marketing (customer acquisition) costs.

  • Recurring license fees

    These are typically maintenance fees paid periodically to gain access to fixes, upgrades, hot line services, and the like. These fees support research and development, customer support, and administration functions.

  • Services fees

    These are the fees paid for consulting and training services provided by the vendor, and are typically billed by man-hour.

The Software-as-a-Service (SaaS) or cloud models have changed the perspective only a bit – SaaS revenue models have essentially zero Initial License fees, and charge higher recurring fees for the right to use the software.

In each case, software vendors seeking revenue growth will frequently look to the services group. Services revenues can quickly dwarf initial license fees – for many years, the services revenue associated with ERP implementation projects was more than 4X the initial license fees. More recently, implementations of big ERP systems have been cut back to a little more than 1X initial license fees due to improved experience and customer push-back. Unfortunately, I’ve heard of some healthcare enterprise implementations driving services revenue at 7X initial license fees or more. Even SaaS companies frequently look to services fees for revenue growth.
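Those multiples dominate total project cost, which is why they matter to buyers. A quick illustration (the $500,000 license fee is hypothetical):

```python
def total_project_cost(license_fee, services_multiple):
    """Initial license fee plus implementation services billed at a multiple of it."""
    return license_fee + license_fee * services_multiple

license_fee = 500_000
for multiple in (1, 4, 7):
    total = total_project_cost(license_fee, multiple)
    print(f"{multiple}X services -> total project cost ${total:,}")
# At 4X, services are $2,000,000 of a $2,500,000 project: 80% of the budget
# goes to consultants, not software.
```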

So what’s wrong with driving lots of services revenue? After all, IBM used growth in Global Services to meet ever higher revenue targets for much of the last decade or so. And if you look at the hourly rates typically charged for services, they must make money, right?

The problem

A services business makes money based on two key metrics: rates and utilization. The higher they can keep their rates, the more money they’ll make. But utilization is the tougher metric to manage. Consultants cost the same whether they are billing or not. And vendors can’t just fire consultants during lean times and hire them back during growth years – consultants take time to educate and gain experience. But having them ‘on the bench’ is a sure way to kill profitability.
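The economics of the bench are easy to sketch: margin on a consultant is billing rate times utilization against a largely fixed cost. A simplified model (all dollar figures are hypothetical):

```python
def annual_margin(bill_rate, utilization, loaded_cost, available_hours=2000):
    """Margin on one consultant: billed revenue minus fully loaded annual cost.

    bill_rate   -- hourly rate charged to the customer
    utilization -- fraction of available hours actually billed (0.0 to 1.0)
    loaded_cost -- annual salary, benefits, and overhead for the consultant
    """
    revenue = bill_rate * utilization * available_hours
    return revenue - loaded_cost

# At $200/hour and 75% utilization, a $250,000 consultant is profitable:
profitable = annual_margin(200, 0.75, 250_000)   # +$50,000
# The same consultant 'on the bench' at 50% utilization loses money:
on_the_bench = annual_margin(200, 0.50, 250_000)  # -$50,000
```

The cost side barely moves while the revenue side swings with demand, which is exactly why vendors are tempted to sell unneeded services to prop up utilization.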

There’s a slippery slope here. Vendors will staff up services to meet peak demand in order to grow revenue. They’ll avoid quick-hit high-value projects that can help you achieve a high ROI in favor of long-term engagements that improve utilization. Then, when demand is down, they’ll have consultants spending a long time ‘on the bench.’ Then the vendor will try to sell services – probably services you don’t need – to avoid under-utilization. While management’s attention is on resolving the services problem, the software suffers.

As a buyer, you may have been tempted to look for a vendor that has lots of consultants available for your project. Or, after reading this, you may be tempted to look for a vendor that has lots of consultants that are heavily utilized. Don’t give in to either temptation. Both are examples of vendors that have started down the slippery slope of services dependency. You’d be better off selecting a software solution that minimizes (or eliminates) the consulting services you need.

Posted in Uncategorized

Some Problems Don’t Change

[ed. this is the CEO presentation from the HarrisData User Conference of May 22-25].

Logistics: The Soldier’s Perspective

“Troops in action should never have to turn their backs on the enemy to fetch further supplies”

“Troops should not be encumbered with supplies beyond immediate needs”

Lt General Henry S. Aurand, mid-1930s.

As we approach the Memorial Day holiday, it is appropriate to begin the presentation by honoring those who gave so much of their lives for all of us. You may note that the General and I share a name; this is no accident, as I am his grandson.

Lt General Aurand was a logistics instructor at the War College in the 1930s. The problem facing army logistics then – ensuring enough supply of ammunition, food, and fuel to front line troops – is similar to the problem facing enterprises today as we focus on just in time production and lean supply chains – and is similar to the problem facing IT departments today as we supply decision makers with the key business intelligence they need when they need it. During the 1930s the General looked for ways to improve the Army’s ability to supply troops in the field, and on maneuvers in Louisiana saw a team from IBM using punch cards to manage steel inventory. It was not until 1944 as Commanding General of the Normandy Base Section that he could order punch cards used to manage inventory – and successfully distribute supplies to the forces liberating Europe.

IBM 917

Many of you have toured HarrisData offices and will recognize our old friend the IBM 917. This one is programmed as a four quarter annual list of the order history file – it is conveniently labeled via plastic tape. Should a friendly sales tax auditor from Minnesota stop by for a visit, you would send a memo to the good people in lab coats to reprogram the IBM 917 to create a list of sales history for Minnesota. Results could take weeks. A business person’s only alternative would be to take scissors and paste to the full printed list.

Programmer Productivity Tools

By the 1970s things improved for the people in the data centers. The productivity tools shown (above) include an IBM RPG Debugging Template, an IBM Flowcharting Template, and a Pansophic Systems report layout ruler with a magnifying strip in the center. When combined with a modern keypunch – one which printed characters along the top of the card as shown above – a programmer could more rapidly produce or enhance a computer program such as the IBM System/3 one-up label program (the card deck in the clip) which is among HarrisData’s oldest intellectual property. It is much faster to insert a card to select only order history records from Minnesota than to write a new program. I still use the magnifying strip and flowchart template on occasion.

    Online Processing – aka “Green Screens”
    Client Server – aka “Blue Screen of Death”
    The Web
    Just in Time Production and Lean Supply Chains

The newer revolutions in data processing are listed without pictures – you can still find these in production today. The original online processing of the 80s featured green screens for data entry and report selections, but a business person would still need a visit to data processing to handle ad hoc requests like sales history from Minnesota to hand the sales tax auditor. With PCs, the 90s brought client server and the feared “blue screen of death”. Here the business user could load the entire sales history into a spreadsheet, then sort and select Minnesota orders for the auditor. The 00s featured the web, allowing access to massive amounts of data without overloading the desktop machine. At each step the business user gained faster access to information, and better focus on the important information. One product of all these revolutions was the ability for business people to get key information fast enough to implement Just in Time production and lean supply chains – no more “Just in Case” inventory to cover for delays in delivering information.

Fastest adoption of new technology ever?

    Born: March 12, 2010 (1st Pre-Orders)
    Sold 1 Million in first 60 days
    Sold 2 Million in first 90 days
    Sold 3 Million in first 100 days
    More than 15 Million in first year
    Estimated more than 40 Million this year
    Imitation is the sincerest form of flattery…
    Now more than 25 other tablet brands

Now that we are in the 10s, the tablet computer has taken over for business people. The tablet, as epitomized by the Apple iPad, is being adopted at an incredible rate – 40 million of them within two years of the product introduction. Why has this technology grown so quickly? Two big reasons: first, when you press the on button the iPad is ready for you. No more waiting for a reboot – turning the PC on in the morning, getting coffee, reading the Wall Street Journal, then wondering why you still have to wait before using the computer. Second is the lack of stuff needed to use the iPad; all you need is your finger. No mouse, no keyboard, no cables – you don’t even need training! Just point and go! The only drawback is that when I was younger my mom taught me not to point – pointing was impolite.

The tablet makers have an answer for that as well – Android tablets have a nice voice interface. Just tell the tablet what you want to do, no more impolite pointing. Be sure to say please and thank you to keep mom happy.

And there is more! Microsoft is not known for tablets, but they do have the Xbox gaming system. At a recent press conference they demonstrated the Microsoft ERP system using the active gaming interface – although press reports said it was kind of boring. Imagine controlling your work by using American Sign Language – or combining work with exercise by using semaphore signs: month-end close aerobics for all!

The important thing in this slide is the Computers Arise! poster on the lectern. This poster was produced by Ted Nelson in 1976 – I have had a copy of this poster on the wall of my residence or office for the past 35 years. Once again a slide I share a name with, but Ted Nelson is not a relative – he is the creator of hypertext. If you look at the back side of the poster you will see my favorite software license term of all time: “Make no mistake, these packages are not ready yet.” [ed. the reverse side of the poster has this term, the OmniLicense from HarrisData does not have this term.] Thus Ted Nelson may also be considered the inventor of vapor-ware.

Ted Nelson offers a vision for the future of computing in this poster – it depicts a computer breaking chains and the words “Computers Arise!”. With the arrival of tablet computers, his vision of the future has arrived – no cables, no peripherals just you and your tablet. We all will work with computers in this world in new and different ways. For example I’ll use football referee signs along with voice and finger pointing: [ed. demonstration of imagined user interface.]

(points to record)
(throws virtual flag, folds arms in front of chest)
Delay of Payment. Penalty 5 percent. Re-Invoice.
(points to next record)

[end of demonstration]

Well, it may not look quite like that. Now I should answer your real question, what is HarrisData going to do with tablets?

Hide this slide unless Mousetrap Project is announced!

(throws virtual flag, rotates arms around each other in front of chest)
False start. Loss of Slide.

    • Committed to Product Growth
      – Modernization
      – Smooth Upgrades
    • Committed to Choice
      – On premise or Cloud
      – Buy or Rent
    • Committed to Quality
      – Aggressive Quality Initiative

The heck with this slide – let’s talk about what HarrisData is doing. Last year when Apple announced the iPad, we recognized that it meant big changes for the entire software industry. We also realized HarrisData is in a position to lead those changes. The whole way people interact with computers has changed – our software will change as well, for both HarrisData and RTI customers.

The first thing we did was to set ground rules for how HarrisData will implement these changes. All R&D related to the new products will be made available to our current customers through the normal product fix and upgrade processes. All products will be the same no matter how they are delivered – the on-premises, cloud, and hybrid products are the same code; they simply use different license forms. That means that as you move to HD5 or RTI 6.0, you will begin to receive enhanced tablet functionality as soon as it is available. While tablet projects like “Mousetrap” are enormous, HarrisData can and will divide the projects into discrete deliverables that can be made available as they are completed. You will see several new features from “Mousetrap” throughout this conference. The “Mousetrap” projects are on an expedited timeline; look for more information through HarrisData’s Lunch & Learn series as more components are completed.

Thank you, enjoy the conference.

Posted in Uncategorized

Cloud Reliability Stumbles

Some big names in the cloud world tripped up over the past few weeks, raising questions of how reliable cloud delivery of enterprise software actually is. How big a deal were these outages?

First to stumble was Amazon.com’s cloud services for business. Problems at a Virginia data center put several of Amazon’s cloud customers offline, with companies reporting problems including “being unable to access data, service interruptions and sites being shut down.” Is this a black eye for Amazon? Maybe not. Amazon offers several levels of service, including backup and remote recovery.

But another issue, Mr. Eastwood said, will be a re-examination of the contracts that cover cloud services — how much to pay for backup and recovery services, including paying extra for data centers in different locations. That is because the companies that were apparently hit hardest by the Amazon interruption were start-ups that, analysts said, are focused on moving fast in pursuit of growth, and less apt to pay for extensive backup and recovery services.

Clearly Amazon passes Data Center Management 101 even if some of their customers did not.

Next to stumble was Microsoft’s Business Productivity Online Services. Problems included periodic e-mail outages lasting several days. Apparently Microsoft did not provide backup and remote recovery, as users needed alternate e-mail accounts elsewhere (perhaps on-premises Outlook) to survive the interruptions.

Yes, Microsoft’s outage last week was annoying and potentially costly to paying customers. If you’re a current or prospective customer of Microsoft’s Business Productivity Online Services (BPOS), you’ll want to look carefully at how the company handled last week’s outages and what their response says about the long-term reliability of BPOS.

Definite shiner on Microsoft’s cloud credibility, and the level of customer service provided does not inspire confidence.

Third, Google crashed and burned during a maintenance upgrade. This is especially alarming, as simple, cost-effective upgrades are a key selling point of cloud-based enterprise software featuring multi-tenant architectures. Some Google customers did not take advantage of a system option to manually back up data to another computer (an on-premises PC or Mac). Many customers had not seen their lost data restored even after six days.

A Blogger Service Disruption update contains four updates from the last 24 hours, starting with this one:

We have rolled back the maintenance release from last night and as a result, posts and comments from all users made after 7:37 am PDT on May 11, 2011 have been removed. Again, we apologize that this happened and our engineers are working hard to return Blogger to normal and restore your posts and comments.

That’s nearly 48 hours of downtime, and counting. Overnight updates promise “We’re making progress” and “We expect everything to be back to normal soon.”

Google definitely fails Data Center Management 101 and gets a black eye. Unbelievably, Google’s customer support during the outage was not only poor, but hostile to customers.

Given these events, one would expect unaffected cloud vendors to issue press releases reassuring their own customers that in the case of a data center problem (local crash, maintenance/upgrade failure, network outage, etc.) the vendor’s cloud structure features server failover protection, remote hot-site availability, full backup, upgrade failover protection, and so on. Aside from Amazon as noted above, the silence is deafening. Instead we are treated to the hype machine shifting to a higher gear.

But the point I raise in my headline is a wider one. It’s about the capacity of cloud application vendors to constantly extend functionality — for all their customers at once — at a much faster rate than the on-premise vendors, who will always struggle to keep up.

[ed. that all-at-once thing sure worked for Google.] This conveniently ignores the lesson we should be learning – faster is only better if it is safer. Far too many cloud vendors are “start-ups focused on moving fast in pursuit of growth, and less apt to pay for extensive backup and recovery services.” Enterprises should beware.

At the end of the day, what an enterprise needs is an ERP or CRM solution that solves the business problems, is backed by a vendor that supports the business (not the software), manages data center risks including remote failover and full backup, and is affordable. Such systems are provided by customer-focused software vendors for whatever deployment option (on premises, cloud, or a hybrid) makes the most sense for your enterprise.

Posted in Cloud computing

Bureaucracy Blamed for Bad Decisions

As Einstein famously stated, “Insanity: doing the same thing over and over again and expecting different results.”

According to a Wall Street Journal article this morning (Don Clark and Shara Tibken, “Cisco to Reduce Its Bureaucracy”), Cisco CEO John Chambers has proven he is not insane. Several explanations of Cisco’s recent performance problems have been discussed. This blog has highlighted cannibalization of high-margin staples by low-margin products and the lack of accountability in decision making (the cause of the cannibalization?). It is appropriate to highlight Chambers’ solution. Per the WSJ, Cisco will:

dispense with most of a network of internal councils and associated boards that have been criticized for adding layers of bureaucracy and wasting managers’ time.

Analysts applaud the move.

The question that should be asked is whether the problem was the existence of the collaborative councils or the absence of executive decision making. To the extent that Cisco’s problems derived from a lack of executive accountability – deferring to collaborative councils for 70% of decisions – Chambers may be overreacting by eliminating the councils entirely. Collaborative processes provide a more complete fact set for executives to consider; we use a collaborative approach extensively at HarrisData for just this purpose. What was missing at Cisco was accountability for the decisions taken – collaborative decisions diffuse accountability. While accountability at Cisco will be restored by this move, can the executives make good decisions without the comprehensive fact sets collaboration provides? Chambers may have thrown out the baby with the bath water.

Posted in Uncategorized