Monday, December 8, 2008

8 Real Ways to Save IT Costs

Times are tough, and everyone is being asked to reduce spending. Some IT departments are seen as an expendable cost center - if they are not saving money, they are not doing their jobs. I think this can be a healthy attitude to keep those sometimes-unaccountable IT departments honest and focused on the corporate business. I have a few high-tech ideas for cost savings that can yield real results. If I were the CIO of your company, this is what I would do to reduce costs in the data center and on the desktop:

1. Use commodity server hardware - do you really need that proprietary big iron Unix server? Wouldn't a Linux blade work instead? Commodity Intel x86 servers can be faster and cheaper than that proprietary hardware, and you'll be surprised at the difference - you might get a 3X performance boost and cut your costs 75% at the same time! Unbelievable? I've seen it myself.

2. Use a commodity server OS - do you need Unix? Linux would work, for roughly 70% savings. If you are not truly married to Windows Server, switch that to Linux too; your operations people will love it.

3. Use an Open Source app server - use JBoss; it's better and cheaper than the Oracle or IBM app servers. If you can rewrite your Microsoft apps, do that too. Or just halt all new Microsoft software projects in favor of Java / JBoss apps.

4. Use Open Source database servers - Postgres and MySQL have evolved in the last 5 years into true powerhouse database engines. If you are using Oracle or IBM, you could save millions by switching your old apps and using Open Source databases for new apps (switching is often little more than a driver and connection-URL change - see the JDBC sketch after item 8). You may have a master license agreement that gives you "unlimited copies" of Oracle - well... how generous of you to line the pockets of the Oracle salesman. You'll find that MySQL and Postgres are much easier to maintain too, so you will be able to reduce DBA costs at the same time. Want world-class support? Pay Sun for MySQL support or EnterpriseDB for Postgres support.

5. Outsource your email servers to Software as a Service (SaaS) - stop paying for Microsoft Exchange or IBM Notes; use Google Apps. If you don't like Google, there are hundreds of alternatives, all with professional-grade email, and your customers and colleagues will never notice the change when they get emails from you. Pull the plug on your in-house email servers and reuse the hardware, or turn them off to save electricity.

6. Outsource your whole data center - do you truly need to worry about having 30 servers in a concrete bunker with UPS, disaster recovery, HVAC, fast internet lines, and monitoring? Hire a hosting company that can do that better and cheaper than you ever could. You can host your test servers, your database servers and your internal web apps. Users won't even know that their apps are really running in a separate data center. You can get dedicated server hardware and fast links to their data center if you need it. Or save a huge amount on VPS services.

7. Use Open Source Office apps on the desktop - OpenOffice.org has truly arrived: with version 3.0 just released, it can read the new .docx format that Microsoft Office 2007 uses. Admittedly it does not behave exactly the same, but I use OpenOffice almost exclusively for documents, spreadsheets, and presentations, and I can easily send them to colleagues who use Microsoft Office without compatibility problems. For email clients you can use Thunderbird to replace Outlook. Google Docs is making some waves, but it is still immature and buggy; I look forward to seeing Google improve these apps enough to compete.

8. Use an Open Source OS on the desktop - Ubuntu and Fedora have really come far in the past few releases to offer truly solid Linux desktop experiences. Most Windows users will have no problem with the new UI. This may seem like a risky move, but if you are serious about cost-cutting, I think you'll find that Windows is a commodity in your office already. Now that email and office apps will run on Linux desktops, is Windows a requirement for your business? You may hit some web sites or Word documents that need Microsoft technologies, but I bet that's more rare than you think.
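
To make item 4 a little more concrete: for many applications, moving to an open source database is mostly a matter of swapping the JDBC driver jar and the connection URL. Here is a minimal sketch - the host, database, and credential values are placeholders, and your SQL and stored procedures may still need some porting:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class ConnectionSwitch {

        // Host, database, and credentials below are placeholders.
        public static Connection getConnection() throws ClassNotFoundException, SQLException {
            // Old proprietary target (kept here for comparison):
            // Class.forName("oracle.jdbc.driver.OracleDriver");
            // return DriverManager.getConnection(
            //         "jdbc:oracle:thin:@dbhost:1521:ORCL", "appuser", "secret");

            // New open source target - only the driver jar and the URL change;
            // the rest of the JDBC code stays exactly the same.
            Class.forName("org.postgresql.Driver");
            return DriverManager.getConnection(
                    "jdbc:postgresql://dbhost:5432/appdb", "appuser", "secret");

            // MySQL would be:
            // Class.forName("com.mysql.jdbc.Driver");
            // return DriverManager.getConnection(
            //         "jdbc:mysql://dbhost:3306/appdb", "appuser", "secret");
        }
    }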

These are concrete, no-fluff ideas that any IT department can use. Obviously smaller, more agile IT departments and companies will find these changes easier to swallow. And some of these ideas might seem pretty risky, but if you want to keep risk down, try things out on new projects first, then switch the older systems later to complete the cost savings. It's OK to mix Linux and Unix machines in the same data center, and there's no need to switch everything overnight. You can get savings project by project to keep some sanity in the environment.

Making these changes won't be easy, and many people will resist them. These excuses are common:
* We already know how these old systems work, let's just buy more and keep it safe
* Open Source is risky, who can we blame for problems, what if we get sued?
* Nobody ever got fired for buying IBM (/Oracle)!

All of these are weak arguments for avoiding the hassles that come with any change. Spending new money on old, expensive solutions is wrong, and you can pay for professional support from RedHat, Sun, or Canonical and still come out way ahead on costs. At IBM and Oracle's prices, you should be fired for choosing them when lower-cost competitors would fit your business even better.

Cost saving pitfalls. I do not advocate cost-cutting in non-commodity areas of your business or IT. For example, offshoring software development or customer service can have bad consequences. Quality varies wildly at home and abroad for software developers and customer service experts, and offshoring those essential skills just multiplies the risk by 6 or 10 time zones. I also would not outsource the sales department - company image and product knowledge are too important to the business to trust to others who are not invested in your future.

History shows us the way. I remember the switch from mainframes to Unix and Windows in the 90's economy. It did not happen overnight, and we heard plenty of grumblings from the luddites who resist all change. It started with commodity apps like email and word processing (anyone remember the mainframe spell checker? yechkkk!), then moved one business application at a time to reduce the risk. We mixed mainframes and server apps for a decade, and we realized that we could reduce costs pretty quickly. As a side-effect we got more choice and better apps.

The same thing is happening in Open Source and the Software as a Service (SaaS) industry. Lower costs, more choices, better products and services. And it starts with commodities like operating systems, database engines, app servers, and apps like email and word processing. I use all of these technologies to save costs at my company.

Don't cling to expensive Microsoft, Oracle, and IBM products that haven't changed much in 10 years, or you'll suffer the same fate as the mainframe-clingers of the 90's. Instead, use the latest technologies to save money and help your business in these trying times.


- Jay Meyer

Sunday, October 19, 2008

A New Code Metric : Destroyed Lines Of Code (DLOC)

We've all heard that counting lines of code, or SLOC, is a terrible way to measure a developer's performance. Function point counting is not a much better metric, just slightly different than SLOC. Instead, conventional wisdom says that good developers write fewer lines that get the job done better, so SLOC and function points do not reward that good behavior. (Although I admit SLOC does have its place in comparing two systems written by similar teams of developers)

I propose a new metric that rewards good coding practice and a simple, brief style: DLOC, Destroyed Lines of Code. You measure the bad lines of code that you remove from the system. If you can destroy lines of code and the system still works, then those lines of code were either bad or just misleading bloat. You can also destroy lines of code by using packages that solve your problems - like destroying JDBC calls and replacing them with Hibernate mappings, or destroying your Factory patterns and Service Locators and replacing them with Spring dependency injection.
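
To make the Spring case concrete, here is a minimal sketch using made-up OrderService and OrderDao names. The hand-written ServiceLocator and factory classes, and every lookup call into them, become DLOC once the container does the wiring:

    // Hypothetical names (OrderDao, OrderService) used only for illustration.
    interface OrderDao {
        void save(String order);
    }

    public class OrderService {

        private OrderDao orderDao;

        // Before Spring, this class reached out to a hand-written ServiceLocator
        // or factory to find its DAO. Those classes, and every lookup call that
        // used them, are the lines that get destroyed.

        // Spring injects the DAO through this setter (wired up in the bean
        // definition file), so no lookup code is left in the class.
        public void setOrderDao(OrderDao orderDao) {
            this.orderDao = orderDao;
        }

        public void placeOrder(String order) {
            orderDao.save(order);
        }
    }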

DLOC in Action

To illustrate DLOC, here's a system I worked on where I was asked to add some features. The system had an admin web interface with a few database tables. So I looked at the admin interface system - about 3000 lines of code, plus JSPs and 10 database tables. First pass, I added a table, then changed all the old JDBC calls to Hibernate (which went faster than I expected). I ended up deleting some JDBC code, so there were some destroyed lines of code right there. Next, I started really asking users about the app, only to find out that nobody really needed the admin web interface. In fact, they didn't need to edit the data that often at all; they would be perfectly happy deploying changes at each release instead of making admins use a web interface. So I proposed removing the database tables in favor of an XML config file with a similar schema, then removing the web interface altogether. The users could edit the XML and deploy it with each release when changes were needed. In the end, I destroyed all the tables, all the JSPs, and much of the Java code, saving only the POJOs. I added some XML parsing, and the whole system ended up at only about 300 SLOC.
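
A sketch of what such a replacement might look like - made-up element and class names, standing in for the tables and admin screens that were destroyed:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Hypothetical lookup entry that used to live in a database table.
    class CodeEntry {
        String code;
        String label;
    }

    public class ConfigLoader {

        // Reads entries from a release-managed XML file shaped like:
        // <codes><code value="A" label="Active"/><code value="I" label="Inactive"/></codes>
        public static List<CodeEntry> load(File xmlFile) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(xmlFile);
            NodeList nodes = doc.getElementsByTagName("code");
            List<CodeEntry> entries = new ArrayList<CodeEntry>();
            for (int i = 0; i < nodes.getLength(); i++) {
                Element e = (Element) nodes.item(i);
                CodeEntry entry = new CodeEntry();
                entry.code = e.getAttribute("value");
                entry.label = e.getAttribute("label");
                entries.add(entry);
            }
            return entries;
        }
    }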

If you analyze my performance by SLOC, I scored terribly: negative 2700. But by DLOC, I destroyed 2700 lines of code, and the users still got what they wanted. Hidden benefit: new data and features were simple to add to the tiny, simple system.

The rules: DLOC Methodology

Any metric needs rules so that performance can be measured fairly, so DLOC needs some rules:

  • count the lines of code that were destroyed (removed) from the system; bigger numbers are better, in contrast to golf scores where lower numbers are better (see the counting sketch after this list)
  • destroyed comments also count - comments can lie, and removing bad comments is value added to your system
  • lines of code you added while changing the system count neither for you nor against you; after all, we don't know if those new lines are any good. We already know that we dislike SLOC, so we'll avoid looking at added lines and only count destroyed lines
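
If you want to actually measure it, here is a minimal sketch that counts DLOC from a unified diff piped in on stdin (for example, svn diff | java DlocCounter); the class name is made up:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    // Counts Destroyed Lines Of Code from a unified diff read on stdin.
    // Added lines are ignored on purpose - they count neither for nor against you.
    public class DlocCounter {
        public static void main(String[] args) throws IOException {
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
            int destroyed = 0;
            String line;
            while ((line = in.readLine()) != null) {
                // "-" marks a removed line; "---" is a file header, not a removal.
                if (line.startsWith("-") && !line.startsWith("---")) {
                    destroyed++;
                }
            }
            System.out.println("DLOC: " + destroyed);
        }
    }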

Obviously this DLOC system can be cheated just as easily as SLOC or function points: I could write a few thousand lines, only to purposely destroy them later. Also, you would expect a low DLOC for a brand new system where you've got to start from scratch (it seems I rarely have this luxury). DLOC is truly interesting on a system that is aging and even more interesting if the system was not constructed quite so perfectly.

Proper DLOC Usage

Truly caring for a software system has led me to apply different techniques to make the software better. I try to use Agile development, test-driven development, and even the Broken Windows theory when I am working on a system. DLOC is more of an observation than a methodology, but I think it's rewarding and fun to measure DLOC while improving a system. So here are some ways you can make your software system better, and increase your DLOC score too:

  • Introduce new technologies into your system: Hibernate, Spring, even converting from Java 1.4 to Java 5 can reduce complexity and destroy lines of code through annotations and typed collections (see the sketch after this list)
  • Ask your customers what they like and dislike about the system; you may find that certain parts of the system can be removed or refactored
  • When you see code that could be improved or destroyed, change it immediately - don't procrastinate. This is Broken Windows for software
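
As an example of the Java 5 point above, here is a sketch with made-up method names - once the typed version replaces it, the 1.4-style version becomes DLOC:

    import java.util.Iterator;
    import java.util.List;

    public class GenericsExample {

        // Java 1.4 style: explicit Iterator and cast - more lines to destroy later.
        static double totalOld(List prices) {
            double total = 0;
            for (Iterator it = prices.iterator(); it.hasNext();) {
                Double price = (Double) it.next();
                total += price.doubleValue();
            }
            return total;
        }

        // Java 5 style: a typed collection and a for-each loop do the same work
        // in fewer lines, so the old version can be destroyed.
        static double totalNew(List<Double> prices) {
            double total = 0;
            for (double price : prices) {
                total += price;
            }
            return total;
        }
    }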

So improving software is the goal, and DLOC is a fun metric you can use to measure the changes you make. You can use DLOC to prove to yourself that the changes you made had a big impact on the system. Be proud of yourself, and please help that poor software become better by destroying those unneeded lines of code. If you don't destroy them, who will?

-Jay Meyer

jmeyer at harpoontech dot com

Friday, May 16, 2008

Cluster your application - NOW!

Is your Java Application Server clustered? Why not? It should be, and your reasons for NOT clustering have vanished, so there's no excuse. Cluster. Now.

When designing a web architecture for an application, this question always comes up: "To cluster or not to cluster?" Clustering allows two or more app servers to act as one. This has many advantages, such as high availability if one server goes down and seamless upgrades to software or hardware. Clustering also allows a successful application to scale by spreading users across more and more servers. Unfortunately, clustered servers are much too rare today. They shouldn't be: clustering should be the default, the normal case.

Question: To cluster or not to cluster?
At first the answer is "no", I mean... clustering makes the installation more complex, there are performance impacts, and configuration problems and... well... RISK! But the benefits can be pretty great for an app that is growing. So if we can eliminate the RISK, then we have no reason NOT to cluster.

Answer: Always Cluster.
The time for clustering every Java application has come. Worrying about statefulness was a habit we acquired when SFSBs in EJB2 were dangerously bad at clustering (and a bad solution for many applications). But today, JBoss, Tomcat, and other Java app servers have matured to make clustering a solid part of any server. Terracotta has even come in to save the day for clusters with large node counts or huge memory needs, so the reasons not to cluster keep shrinking.
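
In practice, statefulness mostly comes down to one discipline: anything you put in the HTTP session must be Serializable so the container can replicate it to the other nodes. A minimal sketch, with a made-up Cart class:

    import java.io.IOException;
    import java.io.Serializable;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical shopping cart held in the HTTP session. It implements
    // Serializable so the app server can copy it to the other cluster nodes.
    class Cart implements Serializable {
        private static final long serialVersionUID = 1L;
        private int itemCount;
        public void addItem() { itemCount++; }
        public int getItemCount() { return itemCount; }
    }

    public class CartServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            Cart cart = (Cart) req.getSession().getAttribute("cart");
            if (cart == null) {
                cart = new Cart();
            }
            cart.addItem();
            // Calling setAttribute tells the container the session changed, so
            // the updated cart is replicated. Mark the web app <distributable/>
            // in web.xml and a failover node picks the session up automatically.
            req.getSession().setAttribute("cart", cart);
            resp.getWriter().println("Items in cart: " + cart.getItemCount());
        }
    }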

RedHat is telling everyone to virtualize Linux... ALWAYS! And it makes sense. Servers out there need the flexibility of virtualization, and the reasons NOT to virtualize have vanished. Its time has come. Technologies like Xen and VMWare have improved, but more than the technologies, the people have matured to understand the benefits and risks of virtualization. So now it's packaging and marketing that are driving virtualization as the default answer.

Java application server clustering should now follow and become the default installation. The technologies are solid, so the clustering idea is just in need of good packaging such as easier installation and configuration utilities, and good marketing in the form of endorsements and documentation about the many benefits of clustering.

The benefits:

  • High Availability - Server crashes, app stays up, 'nuf said, this bene' is obvious.
  • Scalability - more users? add more servers, clustering lets your app grow - also an obvious bene'
  • Hot Deployment - this bene' is rarely understood, but it will save your weekend! Stop deploying on a weekend at 2am! Deploy on Tuesday at 11am, yes, while users are still using the system... You see, most code changes do not require huge data conversions and outages (although those cases do happen). Most deployments are to fix a bug, change a label, add a brand new feature, or add more memory to the server - and those changes do not require downtime if I have a cluster and a deployment process that can make it HOT. It's quite simple - 1) take a server node out of the cluster, 2) upgrade the code, 3) put it back in the cluster, 4) repeat on the other nodes (see the sketch after this list). No late nights, no weekends - and no lost sleep. Developers and admins are handy and awake if anything goes wrong, no need to start that late-night conference call. Freedom...
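
Here is the sketch promised above. The loop is the whole idea; LoadBalancer and Deployer are made-up stand-ins for whatever fronts your cluster (mod_jk, a hardware balancer, or the app server's own management tools):

    import java.util.List;

    // Hypothetical orchestration sketch - these interfaces are stand-ins, not a real API.
    public class RollingDeploy {

        interface LoadBalancer {
            void drain(String node);   // stop sending traffic to the node
            void enable(String node);  // put the node back in rotation
        }

        interface Deployer {
            void upgrade(String node); // push the new .war/.ear and restart the app
        }

        // One node at a time, so the rest of the cluster keeps serving users.
        static void deploy(List<String> nodes, LoadBalancer lb, Deployer deployer) {
            for (String node : nodes) {
                lb.drain(node);         // 1) take the node out of the cluster
                deployer.upgrade(node); // 2) upgrade the code
                lb.enable(node);        // 3) put it back in the cluster
            }                           // 4) repeat on the other nodes
        }
    }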

Believe it? You may not, but I've witnessed it. My system took millions of hits a day, and we deployed twice a day at times - between 8 and 5. We had a cluster of JBoss application servers and did the deployments in broad daylight without any effect on users. And all this with out-of-the-box clustering from JBoss.

If you want to talk RISK - try taking an outage at 2am on a Sunday for 5 hours after 3 months of intense development by a large team. Put in a massive code change and pray that the deployment goes well... Now that's the stuff that nightmares are made of.

So cluster your Java servers, the benefits are big, and clustering comes with every application server - it probably came with the server you've got. No excuses, do it now.

-Jay Meyer