Thursday, December 9, 2010

High Performance Java Environments

For the past several weeks I have been monitoring a set of Java business-tier and portal servers for performance issues related to load and possibly configuration.  Both servers run on 64-bit Windows with 64-bit JVMs.  The portal server uses the low-pause concurrent mark sweep (CMS) collector, while the other uses a more traditional garbage collection algorithm.  I am considering adding CPU cores to each box.
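
For context, choosing between those collectors comes down to JVM launch flags.  Here is a minimal sketch of the two configurations (the heap sizes and jar names are placeholders, not our actual settings):

java -Xms4g -Xmx4g -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -jar portal.jar
java -Xms4g -Xmx4g -XX:+UseParallelGC -jar business-tier.jar

The first line requests the low-pause CMS collector; the second requests the traditional parallel (throughput) collector.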

While I was trying to work inside the tight boundaries of Windows processing, I longed for the Linux world of Java Application Servers.  Where I work, Linux is not an option, so I started to explore additional options.  That is when I re-stumbled upon Azul Systems. Azul has two products, Zing and Vega 3.  I have asked for a demo/trial of the Zing product.  Zing seems to be a specialized JVM and then some.  It requires 16-24 GB minimum RAM and 4-6 CPU cores, also minimum.  Zing boasts the ability to execute JVM heap sizes of up to 1 TB.

My plan is to test the Zing product to increase performance and possibly reduce server counts.

Java Parallelization Options


Last night I taught a class on Monte Carlo Simulation (MCS) using Excel and Crystal Ball (Oracle).  This was part of an ongoing course on System Modeling Theory (a.k.a. Management Science) that I am teaching at Strayer University.  In modeling we use MCS to simulate the probability distributions of uncertain model parameters.  This helps us understand the uncertainty and potential risk of varying the input parameters of the problems we are attempting to solve with our models.

As I was executing the simulation in Excel/Crystal Ball with a normal distribution and 1000 trials, my mind started to wonder how I would do this in Java.  Given my experience with Java numerical computation, I theorized that I would need more resources than just my laptop if I were to pursue more complex model simulations, with many more uncertain input parameters and model permutations, in a JRE.
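
To make that concrete, the core of a single-input simulation is only a few lines of Java.  A minimal sketch, assuming a normally distributed input and a trivial stand-in for the model (the class name and parameter values are mine, purely illustrative):

import java.util.Random;

public class MonteCarloSketch {
    public static void main(String[] args) {
        Random rng = new Random();
        int trials = 1000;
        double mean = 100.0;
        double stdDev = 15.0;  // assumed parameters of the uncertain input
        double sum = 0.0;
        for (int i = 0; i < trials; i++) {
            double input = mean + stdDev * rng.nextGaussian();  // draw from the normal distribution
            double outcome = input * 1.10;                      // stand-in for evaluating the real model
            sum += outcome;
        }
        System.out.println("Average simulated outcome: " + (sum / trials));
    }
}

The interesting (and expensive) part is everything hiding behind that stand-in line, which is exactly where parallelization starts to matter.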

With the advocacy of Cloud Computing everywhere these days, I have been tracking the progress of Java-based parallel and grid computing efforts.  I have noticed a few solutions that would seem to fit the bill for the more complex numerical data computation that I think I would need to tackle complex problems and financial models with Java.

Hadoop
According to its developers, Hadoop is open-source software for “reliable, scalable, distributed computing.”  Hadoop consists of several sub-projects, some of which have been promoted to top-level Apache projects.  Some of the contributors to the Hadoop project are from Cloudera.  Cloudera offers a commercialized version of Hadoop with enterprise support, similar to the model that Red Hat has with its RHEL/Fedora and JBoss platforms.

In a nutshell, the idea behind Hadoop’s MapReduce project, and its associated projects (HDFS, HBase, etc.), is to perform complex analyses on extremely large (multi-terabyte) data sets of structured and/or unstructured data.  The storage and processing of these huge data structures are distributed across multiple, relatively inexpensive, computers and/or servers (called nodes), instead of very large systems.  The multiple nodes form clusters.  The premise behind Hadoop as I understand it is to encapsulate and abstract the distributed storage and processing so that the developers do not have to manage that distributed aspect of the program.

Hadoop’s MapReduce project, written in Java, is based on Google’s MapReduce, written in C++.  It is used to split huge data sets into more manageable and independent chunks of data that get processed in parallel with each other.  Hadoop MapReduce works in tandem with HDFS to store and process these data chunks on distributed computing nodes within the cluster.  Using MapReduce requires Java developers to learn the Hadoop MapReduce API and commands.
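
To give a feel for that API, here is a minimal sketch of the map side of the canonical word-count example, written against the newer org.apache.hadoop.mapreduce classes (the class name is mine; this is illustrative, not production code):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TokenCountMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Emit (token, 1) for every whitespace-delimited token; the reduce phase sums the counts.
        for (String token : value.toString().split("\\s+")) {
            word.set(token);
            context.write(word, ONE);
        }
    }
}

A companion Reducer simply sums the values for each key, and the framework worries about where the input splits live and which nodes run which tasks.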

Grid Gain
Grid Gain is another solution for distributed processing, including MapReduce computation across potentially inexpensive distributed computer nodes.  According to Grid Gain, their product is a “Java-based grid computing middleware.”  There are many features to this product, including what they call “Zero Deployment.”

While Hadoop comes with HDFS, which can be used to process unstructured data, Grid Gain does not use its own file system, instead connecting to existing relational databases such as Oracle and MySql.  Hadoop can also use its own high-performance HBase database, and I have heard of a connector to MySql.  Hadoop seems to provide more isolation for task execution by spinning up separate JVMs for task execution.  Grid Gain seems to come with more tools for cloud computing and management.  Finally, though Hadoop is written in Java, its MapReduce functionality can be used by non-Java programs.

Aparapi
Aparapi is another API that provides parallel Java processing.  Unlike Hadoop and Grid Gain, Aparapi translates Java executable code to OpenCL.  OpenCL is a parallel programming framework originally developed by Apple and now maintained as an open standard by the Khronos Group.  The fascinating aspect of Aparapi and OpenCL is what they are designed to execute on: OpenCL can use Graphics Processing Units (GPUs) for parallel processing.

In my past life I was more connected to hardware than I am today and I worked with Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA) with Analog to Digital Converters (ADC) and Digital to Analog Converters (DAC).  We used DSPs to process waveform data in near real-time.  We would capture the waveform data on our I/O cards and then offload the processing and transforms to a DSP.
I guess this is why OpenCL interests me so much.  With OpenCL, developers can write code that gets compiled at run time so that it is optimized to run on the GPUs in a given computer, or against multiple GPUs in multiple computers.  Based on the “C” language, OpenCL allows developers to use graphics chips like those from NVidia.  Imagine that for a moment…while most of the parallel processing world is harnessing grid and cloud computing power, OpenCL is focusing on a much cheaper hardware footprint.  In fact, Apple developers can use OpenCL on their Macs to harness the compute power of the installed GPU for high performance computing tasks.

With Aparapi, Java developers can now have their code translated for execution in the OpenCL framework.  The use of GPUs for parallel non-video processing is called General-Purpose Computing on Graphics Processing Units (GPGPU).  Unlike CPUs, which execute a handful of threads very quickly and give the illusion of broad multi-threading, GPUs have a massively parallel architecture that allows true simultaneous execution of many threads.
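
From the Aparapi samples I have seen, the programming model looks roughly like the sketch below; the package name and exact method signatures may differ in the released version, so treat this as illustrative only:

import com.amd.aparapi.Kernel;

public class SquareKernelDemo {
    public static void main(String[] args) {
        final float[] input = new float[4096];
        final float[] output = new float[input.length];
        for (int i = 0; i < input.length; i++) {
            input[i] = i;
        }

        Kernel kernel = new Kernel() {
            @Override
            public void run() {
                int gid = getGlobalId();               // index of this work-item
                output[gid] = input[gid] * input[gid]; // one multiply per work-item, in parallel
            }
        };

        kernel.execute(input.length);  // Aparapi translates run() to OpenCL when it can,
        kernel.dispose();              // reportedly falling back to a Java thread pool otherwise
        System.out.println("output[10] = " + output[10]);
    }
}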

Beyond Aparapi there are JCUDA, JOpenCL, and JOCL.  While JCUDA, JOpenCL, and JOCL are JNI-based wrappers around OpenCL and NVidia’s CUDA, Aparapi takes a different approach and uses bytecode analysis to translate Java into OpenCL executable code.

It remains to be seen which platforms and techniques will emerge as the standard.  More to come as I explore some of these Java parallel programming options.

Monday, November 29, 2010

VA SPQA 2011 Board of Examiners

I have been accepted to the 2011 Board of Examiners for the U.S. Senate Productivity and Quality Award Program for Virginia.  These links explain what I am talking about:  About SPQA for Virginia and SPQA Examiners.

Unemployment and Education

On the drive home from work last week, I heard a story about unemployment statistics on NPR.  The speaker was saying that unemployment rates should be viewed in a more stratified manner when it comes to their correlation with the education levels of the American workforce.  I can't remember the exact numbers, but they were very similar to the 2009 numbers reported by the Bureau of Labor Statistics (BLS), except that the unemployment rate was closer to 10% on average.  In this chart from 2009, it is easy to see that more educated workers have lower unemployment rates.  This does not take into consideration the number of potential workers who have fallen off the unemployment rolls.  These individuals are "unemployed" even if not technically reported as such.  Counting them would drive the unemployment rate up to around 15-18%.

Sunday, November 21, 2010

Winter 2011 Teaching Schedule - So Far...

It looks like I will be teaching BUS 515 (Operations Management) at the Henrico Campus of Strayer University this Winter, starting in Jan 2011.  Thus begins my fourth year of teaching at the post-secondary level, and my second year of teaching at the graduate level.

Monday, November 1, 2010

J2EE design patterns for Performance and Scalability

From the LinkedIn J-Architect group, the question was:  Can someone suggest any J2EE design patterns that are available for the performance and scalability of enterprise applications?


First let me say that I am somewhat of a purist, and MVC should be the starting point.  Beyond that, it’s far too easy to get into a holy war about our favorite patterns.  To me, your question was asked in a vacuum without any context, other than the assumed JEE context.  Do you have a "green-field" application, or are you working with a legacy application?  I like DTO/VO, DAO, Service Locators, facades, delegates, etc.  However, with newer JEE components and other vendor solutions, we now also have to contend with ORM, IoC, and DI more than we did in the past.  Now the entity POJOs in EJB and DTOs can be one and the same, if you so choose (not trying to start another holy war here).  Most of the time, we have a heterogeneous and complex mixture of these components in a single application.
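
For anyone newer to those acronyms, the DTO/DAO pairing I mentioned is as simple as the sketch below (names are hypothetical, and in a real project each type lives in its own file):

// A serializable data carrier passed between tiers (the DTO/VO).
class CustomerDTO implements java.io.Serializable {
    private Long id;
    private String name;
    Long getId() { return id; }
    void setId(Long id) { this.id = id; }
    String getName() { return name; }
    void setName(String name) { this.name = name; }
}

// The data-access contract; JDBC, ORM, or mock implementations can sit behind it.
interface CustomerDao {
    CustomerDTO findById(Long id);
    void save(CustomerDTO customer);
}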

For any scalable solution, caching in my opinion is a must.  However, you have not mentioned your primary direction of scalability: vertical, horizontal, or a little of both.  I have always liked the flyweight pattern for caching, but it too assumes a strict context with regard to data structures that might not be acceptable given certain application designs.  It seems to work very well in portal applications, but it is a little more difficult to utilize elsewhere without in-depth data structure and object behavior design.
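
As a rough illustration of what I mean by the flyweight approach to caching, here is a minimal sketch; the class names are hypothetical, and a real implementation would need eviction and sizing policies:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// An immutable piece of reference data shared across requests and sessions (the flyweight).
final class CountryCode {
    private final String iso;
    CountryCode(String iso) { this.iso = iso; }
    String iso() { return iso; }
}

final class CountryCodeFactory {
    private static final ConcurrentMap<String, CountryCode> CACHE =
            new ConcurrentHashMap<String, CountryCode>();

    // Hand out one shared instance per key instead of allocating a new object per request.
    static CountryCode get(String iso) {
        CountryCode existing = CACHE.get(iso);
        if (existing != null) {
            return existing;
        }
        CountryCode fresh = new CountryCode(iso);
        CountryCode raced = CACHE.putIfAbsent(iso, fresh);
        return raced != null ? raced : fresh;
    }
}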

When designing JEE applications for performance, I always have to reach down into the bowels of the system, below the JVMs, to choose the right hardware (OS, 64-bit, multi-core, etc.), networking (bandwidth, VLANs), and database systems and configurations.  There are also the integration layers:  Do you need messaging?  Sync or async?

I also reach upwards, above the JVMs, for the right front ends with regard to load balancing and fail-over.  I am a fan of separating HTTP stacks from your Java Application Servers, as well as using a CMS for non-application content, and I also have to have hardware load-balancing and fail-over, like F5 BIG IP or Redline/Juniper.  With horizontal scaling, you have to ensure that your applications are designed and optimized for clustering and that sessions are managed and replicated effectively and efficiently.  Then there is the whole view-layer category of components; you now need to make sure that you design for usability and reduce network traffic as much as possible via RIA/AJAX.  After all, application performance is largely about the perspective of the users utilizing the system to get their work accomplished.

You mention garbage collection, but even that is subject to application and object types, as well as server configurations.

To me, the consummate JEE performance pattern is really avoiding the two ANTI-PATTERNS of trying to engineer performance solely in the JEE layer and not building in mechanisms to observe application performance.  First, one has to know within which layer to engineer the correct components:  business logic in the JEE application, but data logic in the database; session management at the web/application tier, but load-balancing above that in a network hardware configuration.  For performance engineering, JEE or otherwise, a more holistic approach is necessary, and that approach will guide the implementation of the lower level component patterns.

The second ANTI-PATTERN is not planning for application observation.  All too often, we do not choose designs that allow us to observe application performance.  With AOP, we have had the ability to design in cross-cutting logic to look at application execution; however, far too few of us use such techniques until we see the symptoms of poor application performance.  And by then, it is more costly to re-engineer.  With EJB 3.x, there is an easier way to design and implement DI and AOP from the beginning, but many still do not look at AOP during the initial design of the application.  In the business world we use the paradigm of Plan-Do-Check-Act (Deming).  In the JEE design world, we plan and do, but I don't see a lot of designs that plan for checking and acting.  We check when there is a reported issue, and we act by fixing or re-designing what should have been part of the design in the first place.
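
To make the observation point concrete, EJB 3.x interceptors make it cheap to design in a timing cross-cut from day one.  A minimal sketch (the class is hypothetical; you would bind it to a session bean with @Interceptors and feed the numbers to real monitoring rather than stdout):

import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

public class TimingInterceptor {

    @AroundInvoke
    public Object time(InvocationContext ctx) throws Exception {
        long start = System.nanoTime();
        try {
            return ctx.proceed();  // run the intercepted business method
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1000000L;
            // In a real design this would feed JMX or a monitoring tool, not System.out.
            System.out.println(ctx.getMethod().getName() + " took " + elapsedMs + " ms");
        }
    }
}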

Keep in mind that we have not even mentioned security engineering, or whether you need end-to-end security, point-to-point security, or a mixture of both.  Of course, security engineering should also be designed in when we are designing for performance.  The two are not necessarily mutually exclusive, but sometimes security and performance collide in poor designs.

Wednesday, October 6, 2010

Fall Teaching Schedule

I am pretty busy this Fall with part-time teaching.  At Strayer University I am teaching BUS310 (HR Management) and CIS331 (System Modeling Theory).  At the PMI CVC Fall PMP Test Preparation Workshop I am teaching Frameworks.

Cross-Origin Resource Sharing - Preflighting AJAX Requests

A few months back I was ranting (I do that sometimes) about XSS restrictions and their negative effect on AJAX application development.  Just recently I read about the Cross-Origin Resource Sharing standard from the W3C.  It is some pretty dry reading, but this Mozilla Developer Center article distills it into something usable.  From what I understand, there is hope for AJAX as long as you are using a modern browser that supports "preflighting".  Of course, you also need a server-side layer that filters these preflighted requests and returns the authorization to send the "real" AJAX request from the cross-domain web browser application.
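
On the server side, that layer can be as simple as a servlet filter that answers the OPTIONS preflight and adds the Access-Control-* headers.  A minimal sketch, assuming a single trusted origin (the origin URL and class name are made up):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CorsFilter implements Filter {

    public void init(FilterConfig config) { }

    public void destroy() { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // Advertise which cross-domain origin, methods, and headers are allowed.
        response.setHeader("Access-Control-Allow-Origin", "https://trusted.example.com");
        response.setHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
        response.setHeader("Access-Control-Allow-Headers", "Content-Type");

        if ("OPTIONS".equalsIgnoreCase(request.getMethod())) {
            return;  // preflight answered; the browser may now send the "real" AJAX request
        }
        chain.doFilter(req, res);
    }
}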

Again, this only works if your browser supports it, and we all know how notoriously spotty cross-browser support can be.  Case in point:  we are still using IE6 at my day job.

Sunday, September 26, 2010

My Favorite Monopoly

As an adjunct professor in post-secondary academia for the past 2.5 years, I have taught several business courses at both the undergraduate and graduate levels.  Part of the curriculum includes topics related to economic systems, market types, and competition types.  I can honestly say that in undergraduate level business courses I routinely present the NFL as a contemporary example of socialist markets, monopolies, and monopolistic competitions.

As far as I know, a monopoly is a market condition characterized by many buyers and one seller.  The one seller (or enterprise, in this case) can control pricing through its exclusive control of supply.  That sounds like the NFL to me.  I mean really, where can I purchase professional football as pervasive as the NFL, outside of the NFL?  In the past several years the NFL has protested its innocence in government anti-trust proceedings, all the while saying that its individual teams act as separate businesses in their respective markets.  Really?  Is our government stupid enough to accept that explanation?  Well, reader, don't answer that; it is too depressing.

At a minimum, that makes the NFL the key player in a monopolistic competition, maybe even a socialist enterprise system.  Where is the socialist-hating tea party in this argument?

In these economically distressed times, the NFL continues to force its monopoly on consumers by steadily increasing  ticket prices.  Now, one could argue that the laws of supply and demand should kick in at some point and ticket prices would start to shrink when supply exceeds demand.  Meaning, at some point, the consumers will stop buying tickets until the NFL teams start reducing ticket prices.  However, the NFL has another monopolistic technique.  Since the NFL virtually owns the major networks, the NFL actually forces television blackouts in local team markets when ticket sales are poorer than expected. So, again, the consumer is at the mercy of the monopoly.

Monday, September 6, 2010

From the files of "You can't make this stuff up!"

This was so strange that I had to share it.  And I swear that I did not make it up.

While I was working for one company, I started to look for alternative employment with another.  It just so happens that I had an interview with a company to be the technical lead on their Java team.  I interviewed for several hours, talking to individual contributors, architects, and managers.  In the end they passed on me; according to them, I was not a good fit.  Now mind you, this was after I had been writing Java web applications for a decade and was running my own development team of about 30 people.

A few months later a new position with that same company came up, and I was contacted by a local recruiter.  I informed him that I had already been bypassed for the first position, but he still wanted to try.  He returned to me with feedback from the company that they were not going to pursue an interview with me due to the last interview I had.  Nice!

At about that same time, my manager's manager came to me with a request to help one of his colleagues.  It seems that his friend was getting ready to build a new development team, and since I had managed a very large team with disparate technologies, my manager's manager thought that I would be a good mentor for this person.  I said yes without even thinking to check out who this person was.  It turns out that this friend in need was the manager who had passed on me (twice) from that other company.

So, I went ahead with the mentoring session with this guy.  All the while I was wondering if he knew who I was, and if he did, why he was still asking my advice after he had passed on me (did I mention it was twice?).  He finally let on that he did know who I was, and that is where it got even stranger.  He still wanted my help, and he still thought I was not a good fit for his organization.  So, even though I did not fit into his "new" organization, I was helping him design it.  I could not make this up.

Sunday, September 5, 2010

Tired of Pushing Rope - A Cathartic Rant

I left my last position with a major PC manufacturer and services company because I was simply tired of pushing rope, a.k.a. "Hacking Work".  Everything that I tried to do for the organization or our customer seemed hopelessly complex and laborious, as though I was trudging through quicksand or trying to push rope.  I really started to doubt that I was the right person for the job.  I felt as though I was failing.  Now that I have had time to reflect on the recent past, I realize that what I thought was my failure really resulted in a success:  I walked away with my integrity and work ethic (not to mention my ethics in general).

Since then I have talked with friends, colleagues, and customers.  It wasn't just me. However, no one is harder on me than me.  So I know that I did not walk away from that position clean.  I failed to deliver all the value that I wanted, to the team I managed and to the customer I supported.  In fact, when I stepped down from managing my team, it was because I no longer felt adequate and I was burned-out.  That was somewhat of a selfish motive I guess.

During my time there, I witnessed how we as a delivery and outsourcing company failed to deliver.  In fact, our delivery methodology was broken.  Several customers came to me and told me that we were too slow and too expensive.  I know that I did not have all the right answers, but I did know that since I had been there we routinely delivered late and mostly with reduced customer satisfaction.  Therefore, I pursued Agile/Scrum training.  I was very excited about the iterative approach, and I put together a pitch to my management about it.  It was shot down immediately.  Never mind that we were delivering late and sometimes not at all, never mind that our delivery was essentially broken and over-priced; no one wanted to hear what I had to say, and business went on as usual.  Our technical project teams were exercising a "Hope-Driven" approach.

In my opinion we were more afraid of what the customer would think about our new approach.  We could not show that we knew something was wrong with our approach, no matter how inadequate it really was.  We were also afraid to introduce any new process to the customer, as we thought we were already perceived as heavily process-laden.  I continue to say we, as I did fail to convince anyone of the virtues of changing our delivery model.

One of the major issues was how we traced requirements to testing and sign-off.  We actually had no formal requirements traceability.  Sure, we collected the requirements, but we never linked them directly to verifiable and repeatable test cases.  This became evident when I witnessed ad-hoc  testing as part of the System and User Acceptance testing cycles.  In my opinion the customer simply signed off on the projects when they were satisfied that we could deliver no more value if we continued.

As a benchmark, I consulted with some of my colleagues that also work in the IT Application Services and Consulting industries.  I described how we did business with regards to requirements linking to testing and user acceptance.  They were shocked, as I expected them to be.

Thursday, September 2, 2010

My Time as an Application and eCommerce Architect - Why, What, How

A friend of mine asked an interesting question earlier:  Can you tell me about your experience as architect with Dell/Perot? How did you position yourself to be considered for the role? Did you enjoy the job? Was the role more technical/hands on in nature or more conceptual/framework oriented (Zachman/TOGAF)?

As far as the Dell architect gig goes, I loved the role of trusted adviser to the customer and CIO.   It was less technical and more conceptual.  I was one of several architect types focusing on their own experiential domain.  My domain was fairly wide, containing non-mainframe applications including web and client/server.  As you can guess, this also included various messaging architectures as well (MQ, WS, ESB, SOA).  I guess my knowledge and experience was part of the positioning that I relied on for this role, but I actively sought time with senior management before I was in this role, to recommend that I actually get this role.  I found myself selling the virtues of the role as well as my unique qualifications for it.  It took over a year of lobbying to get into it and get the autonomy that I needed to succeed.

As an architect, I got more face time with high level business folks, as well as the CIO and his VP of innovation.  I routinely found myself in the CIO’s office or in meetings with him, vetting his latest technology direction or decision.  Even if there was a full crowd in the meetings, the CIO and I would be the main two discussing the topic at hand.  Since I moved into the role from a development team manager role, I had a perspective on development activities as well.  However, this also worked against me, as directors and VPs would come to me directly for their needs and try to circumvent the change control process.  I got the reputation for getting things accomplished, but my management wasn’t always happy about that informal process path.

I spent considerable time writing architecture documents that described direction, or assessment of technology, or even documenting our position on a particular technology.  In those docs I used mostly the enterprise, technology, and systems model layers of Zachman.   I really had to understand to whom I was trying to communicate to craft my docs appropriately.  I wrote docs to guide the teams as well as to elaborate to senior management.  

I used TOGAF to help guide me toward a target architecture, though none of my peers used it.  I am no expert myself.  The TOGAF ADM was helpful in so much as it laid out a process for achieving a target architecture.  When I left Dell, I was in the middle of developing a “building permit” process for our technology projects.  As an architect I was cognizant of the lofty positions I could take on technology and tools.  I wanted to be an enabler to the business as well as to the development teams, and the last thing I wanted to do was introduce more process and obstacles to the technology delivery cycles.  The building permit process was going to be a contextual checklist that developers and leads would fill out, in order to proceed with a design.  Based on the answers on the checklist, architecture design reviewers would know how deep to dive into the design based on how pervasively the design affected the published enterprise architecture.  I never delivered this tool, but I still think it is a good idea.

My technology experience and knowledge were very important to my architect role, but even more important was how I communicated.  I needed to know my topic well enough to remain confident when I delivered a decision or recommendation.  I tried never to be heavy-handed, but I also needed to be firm, to exude confidence and decisiveness.  I was looked upon as a leader, and I needed to function as one within my domain.  I also worked on my listening skills as well as adapting my non-verbal communication techniques.  I consciously tried to lessen the negative non-verbal communication; it was not easy.

Tuesday, August 31, 2010

Landed (or stepped) in Linux - OpenSuSE 11.3 on my Alienware Area 51

I threw away Windows and loaded openSuSE 11.3 on one of my laptops (Alienware Area51 that I purchased in 2003).  Yes, you heard right, a 7 year old laptop.

Linux loaded fine and ran great, except for the WIRELESS support.  It took me a few nights of gcc compiling, kernel module loading, and blacklisting, but I finally got the Ralink rt35xxsta driver working in OpenSuSE 11.3.  This was to get an unsupported Linksys/Cisco AE1000 USB WiFi device working.  It seems to be OK, but every now and then it asks to re-associate with the AP.  And it asks for my root password when I boot.

But I am not complaining here.  Stories of Linux/Wireless woes are legion.  I am actually one of the lucky ones.

Sunday, August 29, 2010

I am through with Windows and PCs

Thank you Microsoft and Toshiba, I have finally seen the light.  In 2008, while working at Circuit City, just before it went belly-up, I mistakenly purchased a Toshiba Satellite.  I say it was a mistake because now only 2 years later, I have a door stop that used to be a laptop.  First the hard drive failed just after 2 years.  That is just unacceptable.  I have an Alienware that I purchased in 2003 that still screams.  I have a Dell that I bought in 2000 that still chugs along, albeit slowly (mostly a Windows pollution issue).  At the time I did not know that Satellite was Toshiba's code for poor quality.  Perhaps Toshiba is Japanese for crap.

I purchased a new hard drive, and it took 4 hours to prep it after I installed it.  Then I tried to reload the Windoze P-OS, and alas, the CD-ROM is now misbehaving.  That's four hours of my life that I will not get back.

While I was trying to fix the POSHITA laptop, I was also trying to download a 4.7 GB SuSE Linux ISO image.  It is really not important what I was downloading, except that it was Linux.  I am looking for an alternative to Win-blows.  I could not even attempt it with Microshaft's Internet Exploder; it doesn't handle files that large.  I had to use Firefox.  However, it kept locking up after about 2 GB.  So, I downloaded Google Chrome, and it finally downloaded the ISO.  Three web browsers later, I then tried to write the image to a FireWire DVD burner.  Alas, I failed again.  Every time I tried to write the file, Windows would say that it could not continue.

I then tried to copy the image to a thumb drive.  However, Windows complained once more.  This time it said that the drive was full, when there was over 12 GB free space.  I was able to finally copy the files down individually.

This would be funny if it wasn't true.  Twenty years ago it was challenging trying to get PC hardware to work with the correct IRQ or even DOS.  Now, it is just annoying and costly.  PC manufacturers don't see their piss-poor quality as an issue because MacBook Pros are so expensive.  And most companies do not use MacBook Pros because they are so expensive.  However, I have never met an unhappy Mac owner.  Furthermore, the people in the Apple store just look happier.  I never saw that kind of happiness in CompUSA or Computer City (both of which are now defunct).

It is not just Mac; I have worked with Unix and Linux in the past, and lately (the last 4 weeks) I have been concentrating on Linux again.  If anyone spends any time with that OS, they realize how we have been lulled into believing that only Windows delivers value to organizations.  Microsoft has most of us, companies included, by the short-hairs.  Everyone wants productivity, and most software runs on Windows.  It's the ultimate catch-22.

I really stepped in tech-sand this evening, like so many times before.  Again I was a victim of software and hardware vendors' planned obsolescence.  I see it as just another episode in a series of minor Greek tragedies that I euphemistically refer to as PC repair.

Saturday, June 12, 2010

IT Hero Worship is not a Successful Long Term Strategy

Are you an IT hero?  Have you ever worked late at night on an IT project (of any size) because you just did not have enough time during normal work hours to get it done?  Mind you, I am not saying that this type of hero is all good or all bad.  My biggest strength is "learner"; I love the act of learning, even more than the end result of the knowledge gained.  That desire to learn has fueled many nights of heroism in the face of languishing projects.

The main issue I have with this type of hero is not the heroes themselves but how they are misused, and relied upon in many IT organizations today.  Just as hope is not a strategy, heroism should not be a strategy for making project deadlines.  In my experience most IT project failures are not technical in nature.  I have really never stopped a project because I could not accomplish something through software or hardware technology whose capabilities are fully understood.  However, I have witnessed how knowledge (or the lack thereof) and poor processes (or poor process execution) have stopped projects completely, or until new paths forward were found.

Process is the major culprit.  Under the umbrella of process I include project estimates, development methodologies, project management, and thought processes (for starters).  Of all of these, thought processes are the hardest to correct, and the most insidious.  For it is here that the false paradigm of hero worship and hero reliance is engendered.

I submit that heroes are needed when we fail to execute.  I have lived this.  I have been pulled into projects midterm when the paths forward were clouded with process failure disguised as technology shortcomings.  Better processes would have led to better understanding of technology boundaries for a given solution.  This is actually where architecture improves the outcome, but that is a story for another time.

I am not saying that I have not fed off of this from time to time.  In fact it is somewhat addictive.  In the end, however, the repeated need for heroes inevitably leads to burn-out and morale issues.  It is not sustainable.  Examined from a business perspective, it leads to inefficiency and waste.  Perhaps this is what Russell Mullen and Steve Caudill discuss in their book “A Hero Behind Every Tree - The Non-Technical Reasons Your IT Investments Fail.”  I haven’t read it yet, but the title, description, and reviews seem to suggest the idea that technology is not to blame for IT project issues or investment failures and that heroes are not a strategy.  Even if I did not personally know the authors, I would recommend it for that alone.

Low Fidelity Prototyping and Plain English

While preparing to cover Social Networking in a business class lecture the other night, I tried to anticipate the inevitable questions from students who had never seen some of these tools.  It was then that I stumbled across Wikis in Plain English.  Have you seen this Plain English series?  The videos are both entertaining and informative, which in my opinion makes them quite effective.

As I was watching the video on Wikis I immediately started drawing parallels between the Plain English series and Low Fidelity Prototyping (LFP), a.k.a. Paper Prototyping.  I used LFP to capture and present UI and workflow mock-ups quickly, in front of customers.  However, its intrinsic value in my opinion was realized in the information gained and relationships built by engaging the customers directly.   Everyone knows that not all customers are created equal and some customers find it difficult to contribute to project efforts, even though they are the single best resource for defining usability aspects.  In my opinion LFP is the simplest (not to mention inexpensive) and most effective way to get these customers to create with you.

After almost 20 years in IT, I have trudged through many different movements in GUI and workflow design session tools and techniques:  JAD, RAD, wireframes, MS Visio.  However, LFP has always been the most flexible tool for quickly reaching customers and capturing the essence of their thoughts.  Of course, once this information is captured it should then be distilled into more readable models to ensure that manager types and end-users know that we as IT folks understand their world.

With the CommonCraft Plain English series, we see LFP in motion, with a twist of entertainment added.  It is very effective at conveying the intricacies of Social Networking tools, or even Borrowing Money.  In fact, I see the CommonCraft Plain English series as an aid to help us learn the effectiveness of LFP and how to use it to communicate with high fidelity.

Wednesday, June 9, 2010

Tech-sand vs. Technical Debt

In a recent conversation I was asked to explain the difference between what I call Tech-sand and what Ward Cunningham called Technical Debt. First, I do not see them as the same thing. In fact I see technical debt leading to tech-sand. The more technical or design debt that a business incurs, the harder it can be to get out of debt and move forward. The business is then stuck in tech-sand, not able to move forward, and not able to easily move back and undo what got them there in the first place. And don't even get me started on the irrelevance of considering sunk costs.

Yes some organizations plan for technical debt. In fact, I would argue that most organizations realistically have to absorb some amount of technical debt to remain proactive and competitive. Let's face it, IT is a commodity that is only differentiated by how well it is aligned with business and how well it is used to build barriers to erosion of competitive advantage. The idea behind knowingly incurring technical debt is to pay it down by incrementally replacing components or systems before "interest" payments (in the form of increased maintenance costs) become too large a part of yearly budgets or before aging systems are no longer nimble. Steve McConnell explains it well in his take on technical debt. I see it simply as how leveraged your organization is with technical debt. The more technical gearing you have the less efficient you are.

I argue that my term, Tech-sand, is broader in scope than software and hardware design and development. It is actually a worst-case result. It can be the result of many different architecture and design decisions that are not well thought out, or that are based on politics or on flawed financial models that do not account for the TEI (Total Economic Impact). Not understanding the true TCO of a solution can also lead to technical debt and a complete misunderstanding of what it means to service that technical debt.

Tech-sand can also be compared to a big ball of mud. However, again, it is not limited to strictly design and development of software or hardware.

The concepts of technical debt, servicing technical debt, and TEI should be looked at with the same rigor and systematic approaches that we use to judge the financial worthiness of companies. Apply the ratios. Today's IT budgets primarily go to operating expenses, easily 60% - 70% in some cases. How much technical debt does a company have, and how much of its budget is used for operating?

How well a company manages IT and how well it makes important technical and architecture decisions affect these operating budgets. Simply put, the more technical debt IT incurs, the more money in the operating budget it will need to service said debt. The trick here is to quantify this debt and monetize the budgetary aspects of its effects. Adding more people to the budgeted workforce to service a poorly designed or out-of-date system surely adds to the operating costs and can be seen as a result of technical debt. Spending more every year, after factoring out customary software and hardware annual increases, points to a disturbing and identifiable trend: the IT department, and more importantly the business, is incurring technical debt faster than it can pay down the principal of said debt by replacing aging, poorly performing, and/or poorly designed systems.

Sooner or later these poorly performing, and myopic organizations will find themselves in Tech-sand. At that point, incremental steps are no longer adequate and major initiatives are needed.

Social Networking and Innovation

Is the use of Social Networking tools considered innovation? In an article I wrote several years ago, I defined innovation as "managing the processes that lead to the introduction of new ideas, methods, and technologies that provide business value". While I am not sure that using Social Networking tools is itself innovation, using these tools can be innovative and lead to new ideas. In other words, it can lead to, or at least help along, innovation. In an article I read by Jeffrey Phillips on the Innovation Tools web site, he declares that Social Networking is not innovation. Again, while I agree with most of his points, I think that Social Networking used correctly, including filtering noise, is invaluable to innovative teams.

If anything, Social Networking is emerging technology, if you consider, as I do, that emerging technology is any technology not currently in use by a given business. It's not just leading or bleeding edge stuff.

Tuesday, May 18, 2010

Lecturing on Social Networking

I am an adjunct professor at Strayer University. Lately I have been teaching a graduate-level business course (BUS508) and last night I gave a lecture on Social Networking. I focused on the tools, their utility, their potential business value, and how they could potentially disrupt business processes. Included in my talks were these tools and sites (I know there are more):

- Twitter
- Facebook
- Threaded Discussions (www.experts-exchange.com)
- OpenSpace
- Google Buzz
- Google Docs
- Google Reader
- Web blogs (blogs)
- Video Blogs (vlogs)
- Wikis (wiki-wiki, quick)
- Various Instant Messengers
- Usenet
- YouTube


The lecture was an instant success, and it has actually turned into a graded essay assignment for my students. I used plenty of real-world examples, including how my 10-year-old used Google Docs for his class assignment and then shared it with his class. I even tweeted about how I was teaching about Social Networking, sort of a circular reference.

The feedback that I received followed the same theme mostly; students were unaware of most of the tools. Of the tools that they did know about, considering their potential value or disruption to business was a new concept.

Tuesday, March 30, 2010

Mapping Agile Success

So, I am wondering whether successful Agile adoption has been mapped. When I say mapped, I mean mapped in two dimensions, Value Stream Mapping and Momentum Mapping (R. Ryan Nelson and Karen J. Jansen, University of Virginia, MIS Quarterly Executive, September 2009).

I am not a Lean expert, but from what I know, delivering projects via waterfall creates waste, mainly due to "partial work done." Delivering incremental value reduces waste, but by how much? Can this be mapped?

VSM (Coarse-grained)
1. First, VSM the current waterfall delivery processes, indicating waste points.
2. Next, over time, adopt an Agile methodology.
3. Finally, re-map your delivery methodology under the new Agile processes. Shouldn't the second VSM now show less waste?

Perhaps this is a good place to start: The Art of Lean Software Development: A Practical and Incremental Approach

Granted this is a contrived example, but if Agile delivers value incrementally, before and after VSMs should be able to show the decrease in waste and increase in value.

Understanding value streams is all fine and good, but to me Agile is also very dependent on the team's positivity or negativity. How team members perceive the progress of the sprint or project can drive how open they are to adopting or adapting to Agile. Emotional Seismographs (Esther Derby) can be used to map "...how people responded to events....and provides clues on where the real juice is for a particular project community..." These seismographs are also known as Momentum Maps.

So between the two mapping methods, we can get a reading on waste removal and value added, as well as on how well our teams are adjusting to and implementing Agile.

There is more to come on this topic as I work through the mechanics of these two techniques.

Friday, March 26, 2010

XSS Restrictions - A barrier to UX and eloquent design

So, this is sort of a rant, but here goes. I am working on an E-Commerce punch-out application. For the uninitiated, punch-out is a form of E-commerce whereby the user of a procurement system wants to shop for items found on a remote inventory management Internet site. The user initiates an action in their system that "punches out" of their system and into a shopping experience hosted by the remote system. The user shops in the remote system and then returns to their local system with the shopping cart contents, including pricing. Punch-out is based in large part on the CXML standard. It is CXML that is exchanged in these punch-out conversations between the two systems.

To test our new system, I wrote a small Java web app that uses AJAX to send CXML to, and receive it from, the remote system. Since AJAX uses JavaScript, I immediately ran into security issues with XSS (cross-site scripting) restrictions. I know about XSS, but I initially ignored it because this test app is an Intranet-only app running on my local Tomcat server. I was wrong to be so cavalier.

I am using IE8, and IE8 (along with other modern browsers) has seen fit to block cross-site scripting by default. After all, XSS is a major security issue. I just don't think that it is a major security issue in my environment, and I resent the fact that I cannot work around it. So I did some digging, and it just so happens that I can disable the XSS Filter in IE8 by sending the proper HTTP response header to the web browser from my Tomcat server.

// Sent from the servlet before the response body is written; "0" turns off IE8's XSS Filter for this response.
response.setHeader("X-XSS-Protection","0");

This header tells IE8 not to block the potentially malicious-looking AJAX call. However, if I try to use SSL, then I am right back where I started, as IE8 just seems to ignore my response header in that situation. So, now my AJAX is muted.

I saw AJAX and AJAX-like technologies as a major positive for UX (user experience) design in modern web applications. However, unless I am satisfied to only make AJAX calls to my local server, I am doomed.

Tuesday, March 23, 2010

Hope Driven Development

In a recent group conversation at AgileCoachCamp 2010 NC, we were discussing how we convince developers that Agile testing techniques like TDD are good ideas. During this conversation it was apparent that the lack of testing was acceptable to some developers. How could they justify not testing? Did they just hope that defects would not materialize? Perhaps they hoped that any defects would be mild. Hope should not be a strategy for delivering value and quality. This quickly led to remarks that the approach of delivering untested or under-tested software was akin to HDD (Hope Driven Development).

Hacking Work...

There are too many obstacles preventing us from doing our jobs. We are strapped with archaic and inefficient processes that add little value and really just slow us down. I am not alone in this thinking. During several sessions at the AgileCoachesCamp I heard others trying to make sense of clumsy processes that they had to suffer through just to make management happy. These Agile coaches and trainers were describing having to move through the motions of waterfall to satisfy their managers or customers while they actually execute their projects in true Agile fashion.

Is this what we are reduced to? Is this what I am expected to do? Are there any organizations that get it, really get the idea of Agile and why it does not need to be executed in waterfall fashion with waterfall processes and artifacts? As a burgeoning Agile professional, I am somewhat disappointed at this prospect.

Last week I read an article in HBR about "Hacking Work." That's what we do when we overcome obstacles to getting our jobs done. I know I have done this; what's more, I feel that I do this more often. I see others do it so often that it almost becomes the new norm. There are unofficial positions at companies filled by people that just get things done, regardless of how.

My issue is that I think "hacking" causes us to use more time and resources than we should have to. It introduces stress. Let's face it, those of us that are embracing Agile or have embraced Agile are doing so because we passionately feel that we can do better...better than we did in the past...better than we were taught. What keeps us pushing in our jobs when there is so much pushing back? How do we change our organizations? I mean, if the status quo were good enough, there would not be this movement towards Agile. Doing what we did and getting what we got would be just fine.

According to Dawn Cannan, "There are many techniques for pushing through resistance...". I agree with her when she says, "Change your organization, or change your organization." I am just not sure how much fight I have left in me. I am truly ready to help transition an organization while learning the best ways to go Agile and deliver value regularly and routinely. However, is there an organization out there that is ready to take that journey with me? I hope it is my current organization, but if it is not, I might just "change my organization."

Sunday, March 21, 2010

AgileCoachCamp 2010 NC - An Unconference

This weekend I attended the AgileCoachCamp 2010 in Durham, NC. It was free, I am completely new to this community, and there was no agenda. There was a call for "position papers" that were really just session abstracts. To the best of my knowledge there were no identified speakers before the conference started.

I think I might have been one of just a few Agile newbies at that OpenSpace unconference (as it was called). I am new to the world of Agile, and I was surrounded not just by Agile practitioners, but in large part by Agile coaches, people coaching others in the best practices of adopting and using Agile. At first I felt out of place, but the OpenSpace format was amazingly simple yet really effective. I joined sessions at the beginning but left when I was done providing and receiving value. The entire process seemed organic in so much as everyone immediately understood it and used it to mutual advantage. The session marketplace concept was intuitive and allowed us to self-organize effectively.

This community of Agile Coaches is something that I plan to follow and participate in as much as I can.

New ScrumMaster....

So, last weekend (3/13-3/14) I attended 2 days of ScrumMaster training in Richmond, VA with Lyssa Adkins and Catherine Louis. This was an AWESOME experience; quite honestly, some of the best training I have ever experienced. The following Monday I logged in to the Scrum Alliance site and took the evaluation to become a CSM - Certified ScrumMaster. I am motivated to become more acquainted with Agile (and Scrum) in an effort to deliver value. I have much to learn, but I am determined to get there. So far, I have seen that the Agile community is eager to help.

Avoiding Tech-sand

Tech-sand (technical quicksand) is my label for technology (hardware, software, knowledge, and/or practices) that keeps us from delivering value or progressing in our craft. Often tech-sand is poorly understood yet still-implemented technology. It can also simply be outdated legacy technology that is heavily entrenched. If you have any IT experience, you have most likely experienced tech-sand. My blog is about the pursuit of delivering value without creating tech-sand for me or others to deal with. As far as I know, I came up with the tech-sand moniker. That is to say, I searched for related instances of tech-sand so that I did not squat on someone's IP or trademark. If you think I have failed in my attempt to be original, let me know and I will consider your argument.