Methodology Archive

Detecting CSS transitions support using JavaScript

Progressive enhancement is one of the cornerstones of good web design: You build a solid foundation that supports a broad range of different browsers, and then add features either as you detect support for them or in a way that doesn’t interfere with less capable browsers.

One of the awesome new features in recent versions of Safari, Safari Mobile (the iPhone browser), Chrome, the Android browser, and Palm's webOS is CSS transitions. Transitions work by smoothly interpolating between two states of a CSS property over time. For example, using the simple style rules below, you could have a link gradually change from yellow to red when the user moves the mouse over it:

a {color: yellow; -webkit-transition: color 1s linear;}
a:hover {color: red;}

In a more complicated example, you could use CSS transitions to slide an element off-screen when a replacement element is introduced, like the way that the “pages” slide off-screen when you click through an iPhone’s contacts.

This introduces a problem: what if you're using JavaScript to add the new element, you want to remove the old element once it's off-screen, and you need to support multiple browsers?

You need to detect whether the browser supports CSS transitions so that you can wait until the animation finishes before removing the element, and so that you know it's OK to remove the element right away in browsers that don't support transitions. Here's how you can detect support using JavaScript:

var cssTransitionsSupported = false;
(function() {
    var div = document.createElement('div');
    div.innerHTML = '<div style="-webkit-transition:color 1s linear;-moz-transition:color 1s linear;"></div>';
    cssTransitionsSupported = (div.firstChild.style.webkitTransition !== undefined) || (div.firstChild.style.MozTransition !== undefined);
    div = null;
})();

The variable cssTransitionsSupported will be set to true when transitions are supported, or to false when transitions are not supported.

You can then either use the webkitTransitionEnd (or mozTransitionEnd) events to detect when the transition is finished, or else immediately perform the action when transitions aren’t supported.
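Putting the two together, here's a rough sketch of how you might remove an element once its slide-out transition completes. The function name, the slide-out class, and the WebKit-only event handling are illustrative, not code from a real project:

function removeAfterTransition(oldElement) {
    if (cssTransitionsSupported) {
        // Wait for the transition to finish, then remove the element.
        // (webkitTransitionEnd is WebKit's event name; other engines use their own.)
        oldElement.addEventListener('webkitTransitionEnd', function() {
            oldElement.parentNode.removeChild(oldElement);
        }, false);
        oldElement.className += ' slide-out'; // hypothetical class that kicks off the transition
    } else {
        // No transition support, so it's safe to remove the element right away.
        oldElement.parentNode.removeChild(oldElement);
    }
}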

SharePoint Conference 2009 - Day 1

I'm at the SharePoint Conference in Vegas this week. Registration and the Exhibit Hall started Sunday night, but sessions officially started Monday. I am tweeting all day during the conference; follow me (@mmdeluna) if you are interested. You can track tweets using #spc09. I will be posting daily summaries. Stay tuned!

Registration and Exhibit Hall

This year's conference is SOLD OUT. Compared to last year's 3,800 attendees, this year's attendance of 7,400 is a testament to how broadly SharePoint has been adopted in the enterprise. Registration was pretty well organized, and the badges are smart cards that are scanned (optionally) by vendors for mailing list subscriptions and contests, and also scanned by event managers for session attendance. Most of the vendors I saw in the Exhibit Hall offer document management services - scanning, annotating, encrypting, converting, etc. And then there are the usual partner vendors: ISVs, SIs, training, data recovery, content migration and professional services. Having said that - the giveaways were a bit lame :)

Keynotes

There were 2 keynotes scheduled on day one, which lasted the whole morning. You would think it wasn't smart to have 7,400 attendees sit still for almost 3 hours, but kudos to the presentation team - they pulled it off. Steve Ballmer did his FIRST SharePoint Conference keynote, one of the last few things Bill used to do that Steve hadn't done yet. Tony Rizzo and the others did a great job on the demos, doing enough to whet the appetite of all the geeks (like me) in the room. Here are the items that "struck" me during the keynotes. I am hoping to attend some of the sessions that show these in action.

  • There's a HUGE emphasis on SharePoint and Internet-facing sites. So much so that MS has renamed their products and services to emphasize this. Expect licensing prices to reflect this change.

    • Intranet Products: MS SharePoint Foundation 2010 (formerly known as WSS), MS SharePoint Server 2010, MS FAST Search Server 2010 for SharePoint

    • Internet Products: MS SharePoint Server 2010 for Internet Sites (STD, ENT editions) and MS FAST Search Server 2010 for Internet Business

  • Oh yeah - Steve Ballmer featured Kraft Foods in his keynote - nice! I wonder if this will drive attendance at our session (Wednesday, 1021 @ 1:15 pm)

  • SharePoint 2010 goes into public beta in November - don't forget to download it

  • SharePoint Online (SharePoint in the Cloud)

  • SharePoint Workspaces (Groove Makeover)

  • SharePoint Composites - I need to know more about this.  Interesting.

  • Developer tool integration in VS 2010. One-Click build, deploy and debug >> AWESOME!

  • PowerShell scripting - say goodbye to STSADM

  • New External Content Type / BCS (formerly BDC) - opens up possibilities with integration to backend systems. I’m very excited about this

  • SharePoint Service Applications - say goodbye to SSP

  • Improved List Performance and Caching - taxonomy navigation (tags and labels)

  • New and Improved Central and Site Admin UI - it’s AJAX yo!

  • Built in Spell Checker - it’s the little things…

  • Our PLDs and PLAs will like the improved support for standards, especially WCAG

  • Some Social Computing features out of the box - ratings, notes/comments, blogs, wall (My Network)

  • VS 2010, SharePoint 2010 running on Windows 7 - 64 bit mobile development machine. yay!

Steve made a point of saying he didn't think there's any software out there that competes directly with SharePoint. Jeff Teper implied the same when he compared SharePoint to a Swiss Army knife. Both videos are available online for viewing at the SPC09 website.

The list just goes on and on! There are way too many things to get excited about in 2010. I am hoping to get into the details of a lot of these in the upcoming sessions.

Day 1 Sessions

For the breakout sessions on day 1, I selected a couple of SharePoint overview topics. One was SharePoint 2010 Overview and What's New; the other, more specifically for developers, was the Visual Studio 2010 SharePoint Development Tools overview. These sessions gave me enough information on the overall features available so I can make a more informed selection in the coming days.

SXSW to Go: Creating Razorfish’s iPhone Guide to Austin (Part 3)

Optimization

As the Razorfish Guide to SXSW became more fully developed, we started to look at key areas where we could make performance gains and either actually speed up the site or simply make the site appear to load more quickly. (Check out part 1 of our story to see how requirements for the site were gathered and part 2 to learn about how the site was architected)

Cache it good

One of the earliest steps we took to optimize the application was to use server-side caching. ASP.NET allows you to cache just about anything on the server for quick retrieval. Taking advantage of this feature means that you can avoid extra trips to the database, requests to other services, and repeating other slow or resource-intensive operations. The Razorfish.Web library's abstraction makes ASP.NET's caching easy to use, and we quickly added it to all database calls and used it to store most MVC models.

Zip it up

A second key optimization was to add GZIP compression to our assets. GZIP compression shrinks most text-based files (like HTML or JSON) down to a fraction of their size, and makes a huge difference in the amount of time it takes for a slow mobile client to download a response. IIS7 has this feature built in, but we were running the site off of an IIS6 server. Happily, Razorfish.Web.Mvc includes an action filter that compresses your responses with GZIP.

Strip out that whitespace

Next, we used Razorfish.Web’s dynamic JavaScript and CSS compression to strip out unnecessary characters and to compact things like variable names. Minifying your scripts and stylesheets reduces their file size dramatically. One of the nice features of Razorfish.Web is that it also can combine multiple files together, reducing the overall number of requests that a client has to make. All of this happens dynamically, so you’re free to work on your files in uncompressed form, and you don’t have to worry about going out of your way to compact and combine files.

Sprites

Another key optimization was combining all of the image assets into a single file and using CSS background positioning to choose which image to display. Doing this not only cuts the number of requests that have to be made (from 10 to 1, in our case), but also cuts the overall amount of data that needs to be loaded. Each file has its own overhead, and you can cut that overhead by combining them.

Keep it in-line

As we started testing on the actual iPhone, we still weren’t satisfied with the page’s load time. There was a significant delay between the page loading and the scripts loading over the slow EDGE network. This defeated the purpose of the JSON navigation because the user was apt to click a link before the scripts had a chance to load and execute – meaning that they’d have to load a new HTML page. If the scripts were delivered in-line with the page, there would be no additional request, and they could execute right away. Because the successive content was to be loaded with JSON, concerns about caching the scripts and styles separately from the page were moot. We set about extending Razorfish.Web so that it could now insert the combined and compressed contents of script and style files directly into the page. By moving the scripts and styles in-line, we shaved off about 50% of our load time, and the scripts were now executing quickly enough that the JSON navigation mattered again.

Smoke and mirrors

A final touch was to take advantage of Safari Mobile’s CSS animation capabilities. The iPhone supports hardware-accelerated CSS transitions and animations, meaning fast and reliable animation for your pages. We added a yellow-glow effect to buttons when pressed. The glow was not only visually appealing, but its gradual appearance also helped to distract the user for the duration of the load time of the successive content.

Success

The team managed to pull the web application together in time for launch, and the guide was a smashing success. Over the course of SXSW, sxsw.razorfish.com was visited by 2,806 people who spent an average of 10 minutes each on the site, typically viewed about 8 pages, and often came back for second and third visits. The site attracted a large amount of buzz on Twitter and was praised as the go-to guide for the conference.

When designing for mobile, speed is key. All of the components of the site, including the design, need to work together to connect the user to the content as quickly and as efficiently as possible. In such a hyper-focused environment, the user experience, graphic design, and technology need to be unified in supporting a shared goal.

By producing a responsive, reliable, easy-to-use, to-the-point, and locally-flavored guide to the city, the team succeeded in creating a memorable and positive impression of Razorfish at SXSW.

SXSW to Go: Creating Razorfish's iPhone Guide to Austin (Part 2)

Design and Development

Up against a tight deadline, our small team was working fast and furious to create the Razorfish mobile guide to Austin in time for the SXSW Interactive conference. With our technologies determined and all eyes on the iPhone, we set out to bring the guide to life. (Check out part 1 of our story to find out more about how we set requirements and chose technologies)

The meat and potatoes

The guide is content-driven, and we knew that the site wouldn't be given a second look without strong content to back it up. Our team decided to structure the site as nested categories, with a design reminiscent of the iPhone's Contacts application and breadcrumb navigation (as found in the iTunes Store).

With the flow determined, the creative director started developing the content categories and soliciting suggestions from the office about their favorite Austin haunts. She enlisted an information architect to assist with writing the site’s content, and they churned out the site’s content over the next several weeks.

Simultaneously, one of our presentation layer developers began work on graphic design, another focused on hosting and infrastructure, and I began working on database and application architecture.

Getting around

The first major issue we tackled when working on the front-end of the site was navigation. We had identified several features that were essential for the guide to perform satisfactorily:

  • Rather than loading a new page, new "pages" of data should be loaded as JSON and their HTML constructed on the client side. JSON is a very compact way of moving data and is easy to support using JavaScript's eval function. By using JSON to communicate between the server and the client, we avoided the performance hits of loading a larger request, rendering a fresh page, running scripts again, and checking cached components against the server. Those performance issues are often negligible on a PC with a fast internet connection and plenty of memory, but on a mobile device, every byte and every request makes a noticeable impact.

  • Data should be cached on the client whenever possible, avoiding repeat requests to the server for the same data.

  • The browser’s history buttons (Back and Forward) must work, and ideally work without making new requests to the server.

  • The site must be navigable in browsers that cannot properly support AJAX.

To satisfy both the first and last requirements, we would effectively need two versions of every page running in parallel (a JSON version for AJAX-ready clients and an HTML version for others). Luckily, the MVC framework makes this easy on the server. By properly defining our data model classes, we could either send the model object to a view page for each of the data points to be plugged in and rendered as HTML, or we could directly serialize the model to JSON and send it to the client. To make it easy for the client script to select the right version, all of the JSON page URLs were made identical to the HTML URLs, except with "/Ajax" prepended. With this URL scheme in place, JavaScript could simply intercept all hyperlinks on a page, add "/Ajax" to the location, and load a JSON version of the content instead of a whole new page.
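A minimal sketch of that interception, with hypothetical helper names and an invented example path (the real site wrapped this logic in dedicated classes, described below):

function interceptLinks() {
    var links = document.getElementsByTagName('a');
    for (var i = 0; i < links.length; i++) {
        links[i].onclick = function() {
            loadJsonPage(this.pathname); // e.g. "/Venues/Downtown"
            return false;                // cancel the normal page load
        };
    }
}

function loadJsonPage(path) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/Ajax' + path, true);
    xhr.onreadystatechange = function() {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // The response is the same model that the HTML view would have used.
            var model = eval('(' + xhr.responseText + ')');
            renderPage(model); // hypothetical: hand the model off to the page rendering code
        }
    };
    xhr.send(null);
}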

To determine when to use JSON and when to use HTML, we did some simple capabilities testing. If window.XMLHttpRequest, the W3C standard AJAX object, existed, then it was safe to use JSON navigation on the client. Incidentally, older versions of Internet Explorer and many mobile browsers do not support this object, which greatly simplified later development.
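The check itself amounts to a single line (the variable name is illustrative):

// Only enable JSON navigation when the standard AJAX object is available.
var useJsonNavigation = !!window.XMLHttpRequest;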

Several JavaScript classes were created to support page rendering: A history class to manage caching and the forward/back buttons, a base page class that would take care of rendering JSON into HTML, and an application class that would manage the interactions between the pages, the history, and the user. A handful of page types were identified, and subclasses were created from the base page for each specialized layout and different data model.

A method called BrowseTo was defined on the application class to handle all actions associated with the user clicking a link or going to a new URL. BrowseTo did several things (sketched in code after this list):

  1. Identify the JSON URL (dropping the "http" and the domain, and adding "/Ajax")

  2. Determine which page class to use to render the JSON data

  3. Check whether there's already cached data for the URL, and make a request for the data if there's not

  4. Instruct the page to render

  5. Instruct the history to add the new page to the list of visited sites

  6. Cache the JSON data from the response in memory if a new request was made
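Here's a loose sketch of how those steps might fit together on the application class; the getPageClass, request, history, and cache members are illustrative stand-ins for the actual classes described above, not the guide's real code:

// Application is the application class mentioned above.
Application.prototype.BrowseTo = function(url) {
    // 1. Identify the JSON URL: drop the protocol and domain, prepend "/Ajax".
    var path = url.replace(/^https?:\/\/[^\/]+/, '');
    var jsonUrl = '/Ajax' + path;

    // 2. Determine which page class should render this kind of data.
    var PageClass = this.getPageClass(path);

    var self = this;
    function show(model, isNewRequest) {
        // 4. Instruct the page to render the JSON into HTML.
        new PageClass(model).render();

        // 5. Instruct the history to record the newly visited page.
        self.history.add(path);

        // 6. Cache the JSON from the response if a new request was made.
        if (isNewRequest) {
            self.cache[jsonUrl] = model;
        }
    }

    // 3. Use cached data if we already have it; otherwise request it.
    if (this.cache[jsonUrl]) {
        show(this.cache[jsonUrl], false);
    } else {
        this.request(jsonUrl, function(model) { show(model, true); });
    }
};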

Due to time constraints, we opted to use “dirty-caching” for JSON data. When dirty-caching, you’re storing the JSON object in memory under a key. In this case, the key was the URL. There are a few downsides to this method:

  • Storage isn’t persistent, and only lasts as long as the browser is open on that page

  • You’re using up memory, not disk space, to store data, which could eventually overwhelm the client and cause it to crash

Because the size of the data that we were caching was very small, and dirty-caching is both very fast to implement and universally supported, we used it to temporarily store data. Given more time, we would have taken advantage of the iPhone's HTML 5 local storage features. On any browser that supports this feature, you can store data in a database on the client. Many web applications take advantage of this feature to provide persistent offline access to content. The downside is that the HTML 5 local storage API is somewhat tricky to implement properly and is currently confined to a select few browsers.
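In code, dirty-caching is nothing more than a plain object keyed by URL. A minimal sketch (names illustrative):

var jsonCache = {};                  // lives only as long as the page is open

function cacheResponse(url, model) {
    jsonCache[url] = model;          // keyed by the "/Ajax" URL
}

function getCachedResponse(url) {
    return jsonCache[url];           // undefined if this URL hasn't been fetched yet
}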

A little bit of history

Forward and back button support comes naturally when you’re loading new pages, but for the JSON version of the site, we implemented a solution based on URL hashes (the # data at the end of a URL). Most browsers will include URL hashes as a state that can be navigated to using the forward and back buttons. By regularly scanning the URL hash, you can update your page when there’s a change and simulate forward/back button support. Our history class was designed to add the “/Ajax” path as the URL hash, making it easy to determine what JSON data to load when the hash changed.
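A simplified version of that scanning loop; the handler name is a stand-in for the real history class's method:

var lastHash = window.location.hash;

// The back/forward buttons change the hash without reloading the page,
// so a change in the hash tells us which "/Ajax" data needs to be shown.
setInterval(function() {
    if (window.location.hash !== lastHash) {
        lastHash = window.location.hash;
        var jsonPath = lastHash.substring(1);   // e.g. "/Ajax/Venues/Downtown"
        loadFromHash(jsonPath);                 // hypothetical: fetch (or read from cache) and render
    }
}, 100);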

With our navigation system in place and our creative team churning out new content for the site, we took a step back and started to look at performance. Check back next week to see how we fine-tuned the site to work quickly and responsively on the iPhone.

SXSW to Go: Creating Razorfish’s iPhone Guide to Austin (Part 1)

Once a year, the internet comes to visit Austin, Texas at the South by Southwest Interactive (SXSWi) conference, and, for 2009, the Razorfish Austin office was determined to leave an impression. We ended up making close to 3,000 impressions.

Industry leaders and the web avant-garde converge on Austin for one weekend each year to learn, network, and see the cutting edge of interactive experience and technology. And also to take advantage of any number of open bars. It is a conference, after all.

The Razorfish Austin office typically plays host to a networking event and takes out ad space in the conference guidebook. In 2009, confronted with shrinking budgets in the wake of the global financial crisis, we knew we had to set ourselves apart and do it on the cheap.

iPhone Apps were on everyone's mind (and would be in every conference-attendee's pocket), and would prove to be the perfect venue to showcase Razorfish's skill and Austin's personality. In late January 2009, three presentation layer developers and a creative director formed a small team and set out to build an iPhone-ready guide to Austin.

Over this series of articles, I’ll be diving into how we created the Razorfish Guide to SXSW iPhone-optimized web site. Part 1 will deal with requirements gathering and technology choices, part 2 will cover design and development, and part 3 will talk about what we did to optimize the mobile experience.

Requirements

The first thing we did as a team was to sit down and discuss what the guide had to be. Going in, we knew we wanted it to be on the iPhone because of the cachet associated with the device. We also knew that we had a very condensed timeline to work in – we needed to launch in 5-6 weeks, and we all had other projects that required our focus.

To App, or not to App?

One of the first decisions we made was to approach the guide as an iPhone Web App, rather than building an Objective-C compiled application. We knew that we didn't have a technical resource available who already knew Objective-C, and that we would have trouble getting approved and into the App Store in time for our launch. Most importantly, we needed as many people as possible to be able to use the guide, and didn't have time to create different versions for different devices.

iPhone Web Applications offer not only a way to leverage the iPhone's impressive graphical capabilities, thanks to Safari Mobile's excellent support for current and upcoming CSS standards, but also a way to reach other platforms using progressive enhancement (testing for a feature, and then enhancing the experience for clients that support that feature).

Mobile madness

There are dozens, if not hundreds, of mobile browsers out there, with wildly differing interpretations of CSS and JavaScript. Check out Peter-Paul Koch’s CSS and JavaScript mobile compatibility tables if you need convincing. Supporting multiple mobile devices is no cakewalk, especially since many of them have incorrect or misleading user agents.

The iPhone was our target, and some mobile browsers, such as many versions of Opera Mobile, also have relatively good standards support, but what about IE Mobile or Blackberry?

We quickly came to the conclusion that, because of the condensed timeline, we would test in and support Safari Mobile only, but that the site also needed to be fully usable with no CSS or JavaScript whatsoever. By ensuring this baseline level of functionality, we could be certain that even the Blackberry browser could at least limp across the finish line.

Back to the desktop

Along with choosing mobile browsing platforms to support, we also had to decide for which desktop browsers to design the site. Ordinarily, desktop compatibility testing is dominated by Internet Explorer 6, but this site was geared towards web designers and developers.

That meant more people would be visiting the site using Chrome than using IE6.

IE6 was swiftly kicked to the curb, and we settled on fully supporting Firefox 3, Safari 3 and Chrome, with basic support for Internet Explorer 7. Safari and Chrome support came almost for free, because the two render almost identically to iPhone’s Safari Mobile.

Site be nimble, site be quick

Supporting mobile devices means contending with weak signals, slow connections, small screens, bite-sized memory, and users who are on the go. There are a number of factors conspiring against any mobile website, and we knew that we would have to eke out every last bit of performance in order to overcome them.

Limit the chatter

Client interaction with the server not only increases design complexity, but it also increases the size and number of requests. There were several key factors that made us decide to keep forms and complex interactivity out of the site:

  • Applications that use forms have to validate the data, and guard against attacks. This can slow down the experience, and also would require a more in-depth security review.

  • POST requests are slow. Data-heavy responses are slow. Increasing the number of requests involved in typical usage puts a heavier burden on the server and delays the user in getting from point A to point B.

  • Sites that can be customized or that allow the user to log in typically can’t cache data as efficiently, because page data is often sensitive to the user.

To make the site run quickly, launch on time, and be successful in its goals, the application would be focused on being the best guide it could be, and not on integrating your Twitter account and kitchen sink.

Sell the brand

Lastly, the guide had to make Razorfish look good and leave a strong impression of who we are and what we’re all about. If the guide was as informative and fast and easy to use as can be, but didn’t sell our brand, it would be a failure.

Technologies

Based on the requirements we gathered, the team picked familiar development libraries and languages to work with.

XHTML, CSS and JavaScript

These languages should come as no surprise, as they’re integral to all web applications. An important decision that we did make, however, was that no JavaScript or CSS frameworks should be used.

For desktop development, our industry has become increasingly reliant on JavaScript frameworks to smooth out cross-browser wrinkles and speed up our work. Generally, JavaScript frameworks excel at meeting both of those goals.

There are a number of problems to consider when evaluating a JavaScript framework for mobile development:

  • Frameworks add a lot of bulk to the page. 54 KB for jQuery 1.3 isn’t much on the desktop, where fast internet connections are common, but it’s painful over 2G wireless connections used by many mobile phones (the first iPhone model included).

  • When you’re targeting a single platform (or a standards-compliant platform), a lot of the framework’s code is going to go to waste. Much of the code in JavaScript libraries is for abstracting cross-browser compatibility issues.

  • When you’re targeting multiple mobile platforms, most frameworks aren’t built with mobile in mind, and may be unable to perform properly regardless.

  • iPhone doesn’t cache components that are over 25 KB in size. (Unfortunately, this is when the component is decompressed, so it doesn’t matter if the component is under 25 KB when GZIP compression is used.)

  • The framework’s code has to be executed on the client in order to initialize all of the framework’s components. On slower clients, such as mobile devices, this is a longer delay than you might think, and many of those features probably won’t be used on the site.

In the future, JavaScript frameworks may overcome these challenges, but we resigned ourselves to starting from scratch for this project.

CSS frameworks were out of the question for many of the same reasons.

ASP.NET MVC

The ASP.NET MVC Framework was chosen as our server-side technology primarily because of the team’s familiarity with it. Having just recently used the technology on other projects, it was still fresh in our minds. The MVC framework allows for quick, clean and very functional design that you have a great deal of control over.

Razorfish.Web

We elected to use our internally-developed .NET library that's specialized for use on web projects. Razorfish.Web has a number of features that made it indispensable for this project, such as dynamic CSS and JavaScript compression. As I'll cover later, we extended the library while building the guide to push optimization even further.

SQL Server

Microsoft’s database engine was the natural choice to go along with ASP.NET MVC. We used LINQ to SQL to easily communicate with the database from the web server.

With our tools selected, we were ready to start building the site. Come back for part 2 to learn about some key design and development decisions that went into making sxsw.razorfish.com.

agile and pair programming

One of my favorite topics in agile and iterative development is pair programming. The question is: can we make it happen more, and do we want to try it more? I've typically seen it on smaller and more isolated projects. It's a fascinating concept, and the research I have found, while minimal, tends to say two developers get more high-quality work done than one working independently.

I also found it interesting that it's a core tenet of education in some circles today. When my wife was getting her master's in education, pair learning was one of the approaches she was taught. Often it's three or four students, but two works. All her classrooms are broken into small groups, and there's apparently a lot of educational research backing up the fact that students learn more working in small groups than alone. I'll ask her for some research links.

I ran across a Distributed Agile post today that dug up some more research backing up pair programming. Here's what the post had to say:

“Pairing is the most powerful tool they’ve ever had. Skills don’t matter as much as collaboration and asking questions. Goal for new hires is to get their partner hired. Airlines pair pilots… Lorie Williams at the University of North Carolina did an experiment and found that the paired team produced 15% less lines of code with much better quality”


Leveraging Model Driven Development

[Figure: Project Triangle]

Achieving efficiency in the software development process is one of the key motivators every team should strive for. Efficiency can be measured in a variety of ways. The most obvious measurements are cost, project timeline, and the feature set that can be implemented given the first two. In a sense, it boils down to the old project triangle (remember: pick any two of the criteria).

In essence, there is a trade-off between quality, timeline, and cost. For example, reducing the timeline at equal cost reduces quality, just as implementing at a faster pace does. Yet I argue that the triangle approach is not necessarily valid anymore. Traditional development processes have clearly shown that simply extending the timeline of a project to put special care into the design does not actually lead to higher-quality software - quite the contrary.

Yet more dimensions are at play. The number of defects ("bugs") found in a piece of software translates directly into cost and time, especially when they are found late in the development cycle, creating a dependency between testing quality, time, and cost. Inefficient software design increases the cost of introducing new functionality as requirements change, and a lack of refactoring capabilities sooner or later leads to the need for a full re-development. The problems are amplified when the software spans multiple independent subsystems, which is often the case in modern web architectures that span content management systems, web services, search engines, commerce engines, custom web applications, etc.

Agile development methodologies have tackled many of these problems in great detail through test-driven development (TDD) and time-boxed iterative release cycles. This article discusses a number of tactics you can deploy in addition to what you find in your agile toolkit to speed up development and tackle complex problems with smaller teams in less time, leveraging the key ideas of Model-Driven Development (MDD).

Leaning on MDD

Model-Driven Development (MDD) is a rather interesting software development paradigm that puts the modeling aspect of software engineering at the center of the development process.

[Figure: MDD Overview]

The most popular notion of MDD is the Model-Driven Architecture (MDA) standard by the Object Management Group (OMG). MDA is based on a variety of modeling artifacts. At the top is a platform-independent model (PIM), which captures only the business requirements using an appropriate domain-specific language. This model is then translated into any number of platform-specific models (PSM) using a platform definition model (PDM) for each platform. In essence, this is equivalent to modeling your software in a very high-level, business-specific way and then using a translator such as a code generator (the PDM) to convert the model into code (the platform-specific model). Given the same business model and the correct translation routines, the software can automatically be built using C#, Java, or PHP.

MDA in theory has a number of advantages over traditional coding:

  • It obviously appeals to the business owner who can finally re-use the conceptual business model across technology trends, i.e. re-implementing the solution using new technologies does not require a complete overhaul but is simply a matter of switching technologies. Numerous companies specialize in MDA and even rapid-prototyping tools exist which integrate agile development methodologies with MDA. Instead of developing software in iterations, the model is developed iteratively and can then be generated into executable code.

  • When using code generation frameworks, such as the open source tool AndroMDA, one can quickly build applications using existing code generators. A simple UML domain diagram can immediately be translated into Spring MVC controllers, domain objects, Hibernate mappings, and much more.

  • When the software spans multiple sub-systems, MDA nicely enforces the correct translation of the model across the different technologies used in each of these systems. While I prefer writing generic code to duplicating code via code generation, this isn't always feasible (e.g. for XML configuration files or TeamSite CMS data capture templates). In MDD, changes to the model can instantly be translated into multiple code artifacts using different technologies at the push of a button.

Yet I also see a number of serious issues with the OMG’s vision:

  • As an “agilist” at heart I strongly oppose the idea of spending excessive time modeling software in great detail such as highly granular UML diagrams. Software is meant to be code, not a myriad of UML diagrams which are modeled without an in-depth understanding of the features and limitations of the underlying frameworks. I value the use of UML as a pictorial language, especially when illustrating concepts either on a white board or in documentation. But not when used in a strong forward-engineering paradigm.

  • MDA reduces the application of a particular technology or framework to a simple technicality, i.e. the creation of a platform definition model. Yet building applications efficiently relies heavily on the capabilities and limitations of the underlying frameworks.

  • Code generation is equivalent to duplicating a code template using the model as an input. However, I generally prefer writing generic, re-usable code to unnecessary duplication. The benefits of generic code are obvious, not only is the application smaller, but debugging and maintaining the code is by far easier. Generated code would force you to debug the same piece of logic in many places of the application and fixing it requires changing the code generator templates and ultimately re-generating the entire application.

  • Building a platform definition model, i.e. the code generator, for an entire application can be a huge undertaking. On the upside, many vendors and open source technologies, such as AndroMDA, ship with a variety of pre-built cartridges. However, by using existing code generators one reduces both the implementation flexibility and the maintainability of the application. Debugging and fixing issues in these pre-built code generators can be tedious, and fixes can easily be overwritten by the next release of the generator. Further, generic code generators tend to be quite complex precisely because they have to be so generic.

  • When building web applications, I usually like to encourage my teams to push the boundaries and leverage the latest technologies available. Using existing code generation frameworks obviously won’t leverage the bleeding edge of technology, forcing you to write your own.

Leveraging the Key Tenets of MDD

While I argue that in its pure form, that being the notion of building an entire application using this paradigm, MDD is not my first choice, I would also argue that it has an obvious allure to it. Writing generic code is not always an option as all modern frameworks require configuration, plumbing code, mapping directives, etc. This is exactly where the code generation aspect comes to fruition. Given a central domain model, many artifacts surrounding a domain object can be automatically generated.

A major objection I am often confronted with is whether this approach lacks flexibility, since the code is generated according to the same pattern every time. My argument is that this is actually an advantage for the majority of any application. Of course, the code generation framework needs to be able to handle special situations where the generic functionality needs to be extended.

Let's consider an example. A software team is integrating an XML-based content management system with a web application. The CMS team is responsible for defining the content input forms in the CMS, which are used by the end user to create the XML. The application team writes a parsing layer which parses the XML into domain objects, and a web application on top of it. After the teams agree on a content model, i.e. the structure of the XML files, both teams can start implementing all necessary coding artifacts.

[Figure: Sample Application]

However, since all artifacts are developed manually, the teams will encounter a number of bugs when integrating the pieces, bugs which result from the two separate systems relying on the same underlying domain model. Further, different bugs are likely to be found in each of the content forms and the associated parsing layer, because different developers make different mistakes.

Consistency may be another issue. Especially when multiple developers are working on the individual functionality, each usually adds their own spin to the code. Some date fields in the CMS forms may have calendar buttons next to them, some may not. Some developers might use camel case, others may not.

Of course both of these issues can be addressed by establishing sound coding conventions as well as doing impeccable up-front design of all the sub-systems. But reality shows this is rarely the case. Especially when reacting to changes during the development cycle, such as the web application team noticing that they need additional fields in the CMS, the original design efforts are often neglected.

[Figure: MDD with Code Generator]

Consider the alternative, which is more aligned with the MDD paradigm. The teams agree on a domain model and then build a vertical slice, i.e. a functional prototype of the system through all defined layers. Then, using this prototype, the teams build a code generator which takes the domain model as an input and automatically generates the CMS forms and the application-level parsing layer for the resulting XML files. The domain model is then fed into the code generator and the application is automatically generated. The code generator automatically enforces consistency. If any bugs were encountered that resulted from the coupling of the two systems, the code generator would have to be changed and the application re-generated.
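To make the idea concrete, here is a deliberately tiny, illustrative sketch of the pattern in JavaScript. The model, the field types, and the generated output are invented for illustration and are not the real generator, which targeted CMS templates and a .NET parsing layer:

// A toy domain model: one entity and its typed fields.
var articleModel = {
    name: 'Article',
    fields: [
        { name: 'title',       type: 'text' },
        { name: 'publishDate', type: 'date' },
        { name: 'body',        type: 'richtext' }
    ]
};

// Generate the CMS input form from the model...
function generateForm(model) {
    var rows = model.fields.map(function (f) {
        return '  <label>' + f.name + '</label> <input name="' + f.name +
               '" data-type="' + f.type + '">';
    });
    return '<form id="' + model.name + '">\n' + rows.join('\n') + '\n</form>';
}

// ...and generate the matching parsing stub from the same model, so the two
// artifacts can never drift apart. (readElement is a placeholder helper in
// the generated code, not a real API.)
function generateParser(model) {
    var lines = model.fields.map(function (f) {
        return '  result.' + f.name + ' = readElement(xmlDoc, "' + f.name + '");';
    });
    return 'function parse' + model.name + '(xmlDoc) {\n  var result = {};\n' +
           lines.join('\n') + '\n  return result;\n}';
}

Running both generators over the same model is what keeps the form and the parser consistent; add a field to the model, re-run the generator, and both artifacts are updated at once.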

In addition to the maintenance and consistency advantages, the teams also saved time. In the traditional approach, each team had to manually build the application logic for each entity in the domain model in each of the participating subsystems. In the MDD scenario, the teams built a vertical slice prototype and then translated that into a code generator - which automatically generated the application.

Alternatives

Especially when building web applications, a strong alternative to MDD is the use of Rails-like frameworks, such as Ruby on Rails, Grails, Monorail, and many others. The underlying core ideas are aligned with what I consider the key advantages of MDD:

  • Don’t repeat yourself (DRY): Instead of repeating yourself, write code once and then have the framework take care of creating the plumbing code. This is typically done under the hood by the framework.

  • Conventions over Configuration (CoC): Rather than having every aspect of the application be built differently, establish sound conventions and use these throughout the software to ensure consistency and eliminate unnecessary (and unmaintainable) bloated configuration files.

In essence, these frameworks try to solve the same underlying problem. Yet Rails frameworks focus on building web applications within the same technology stack. For simple (web) projects that are easily contained in one logical application, not spread across multiple software systems, any Rails framework is an excellent way of building an application quickly and iteratively. Once an architecture spans multiple technologies or frameworks, or requires custom coding using proprietary products (such as a CMS), MDD proves to be the big brother of Rails-like frameworks.

Conclusion

I have used both the Rails and the MDD approach throughout my career. I have introduced the light-weight MDD approach in a number of recent projects at Razorfish, which led to us building a jumpstart kit that, in its first iteration, lets us quickly bootstrap projects with Interwoven TeamSite and .NET as an application platform. This has not only saved us a lot of time, but also a lot of headaches and long nights of debugging code. We are able to quickly react to changes during the development cycle. Changes to the domain model can be made in the short time it takes to open a UML editor and re-run the code generator.

I consequently see MDD as a vital part of agile enterprise development and a complementing technology which picks up where a Rails framework hits its limits.


The Back of a Napkin...

Oftentimes we spend significant time caught up with diagrams and illustrations of our enterprise technology architectures, patterns, and concepts. Most recently a couple of us have been spending time on some diagrams to help illustrate the concept of a content management bus to help a large organization better share and tag their content. It's definitely a critical and fun skill. A former colleague, Dan Roam, has just published a book, The Back of the Napkin: Solving Problems and Selling Ideas with Pictures. Having worked with Dan for years, I can tell you his ability to use visual thinking to help communicate complex business and technology concepts is just incredible. I just ordered a couple of copies and I am eagerly awaiting them from Amazon. Especially since I'd love some ideas on how to better help folks think about this content services bus we are working on :)…