CMS Archive

Serverless Continuous Delivery for AEM on AWS

As mentioned in previous posts, I have been working on solutions that involve Adobe Experience Manager (AEM) and Amazon Web Services (AWS) to deliver digital experiences to consumers for the past few years. This was partly through custom implementations for our clients, and sometimes through Fluent, our turnkey digital marketing platform that combines AEM, AWS and Razorfish Managed Services and DevOps. I highlighted some of our ideas and best practices around managing a platform like that in a white paper created together with the AWS team.

One of the rules we apply in our work is striving for 100% automation in our deployment process. As I mention in the white paper, we leverage Jenkins in all our work to enable this. Combined with the fact that both AEM and AWS provide API access to various services and functionality, we have been very successful in automating this process.

That said, configuring Jenkins and its individual jobs is often a somewhat manual task. In addition, Jenkins (and the builds it executes) runs on its own EC2 instance(s), which requires monitoring and incurs an ongoing cost. Especially in cases where deployments become less frequent, it would be ideal to utilize an on-demand, serverless deployment infrastructure.

AWS released CodePipeline earlier, and combined with the newly announced AWS CodeBuild, we can now look at replacing Jenkins with this set of AWS services, eliminating the need for an always-running build server that is a potential single point of failure. And with a unified architecture leveraging AWS tools, we have a solution that integrates well with the overall AWS ecosystem, providing additional benefits around artifact management and deployment.

I’ll describe how AWS CodePipeline can be used in the next section.

Architecture

To describe the solution, a simple pipeline is outlined below, patterned after the WordPress example in the AWS documentation. It omits some critical additional steps, such as performance, security, regression, visual, and functional tests, which I will touch upon in a subsequent post.

The we-retail sample application, a reference AEM application for a retail site developed by the Adobe team, was used.

AWS CodePipeline Flow

As shown in the diagram, CodePipeline connects to GitHub for the latest source code in the first step. It also retrieves configuration information and CloudFormation templates from AWS S3. If any updates are made to GitHub or to the S3 objects, CodePipeline will be triggered to rerun.

The next step is to compile the application. AWS CodeBuild is a perfect fit for this task, and a basic profile can be added to perform this compilation.
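
As an illustration of what such a build profile could look like, the sketch below creates a CodeBuild project with an inline buildspec for a Maven-based AEM build. This is a minimal sketch only; the project name, IAM role ARN, and build image are placeholder assumptions, not values from the actual pipeline.

import boto3

codebuild = boto3.client("codebuild")

# Inline buildspec: compile the AEM project with Maven and hand the resulting
# content package back to CodePipeline as the build artifact.
buildspec = """
version: 0.1
phases:
  build:
    commands:
      - mvn clean package -DskipTests
artifacts:
  files:
    - '**/*.zip'
"""

codebuild.create_project(
    name="aem-we-retail-build",                      # placeholder project name
    source={"type": "CODEPIPELINE", "buildspec": buildspec},
    artifacts={"type": "CODEPIPELINE"},
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/java:openjdk-8",     # assumed Java build image with Maven
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::123456789012:role/codebuild-role",  # placeholder role
)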

If compilation is successful, the next step is to create the staging infrastructure. We leverage our CloudFormation templates to instantiate this environment, and deploy the application into it. A manual approval step is then inserted into the flow to ensure manual review and approval. Unfortunately, at this point it is not yet possible to include the CloudFormation output (the staging URL) in the approval message.

Once approval is granted, the staging infrastructure can be removed, and the application can be deployed to production. As production is an existing environment, we deploy the application using the standard API AEM provides for this. To do so, a Lambda function was created that mimics the set of curl commands that can be used to upload a package and install it.
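
The snippet below is a simplified Python sketch of such a Lambda function (the actual implementation may differ). It mimics the usual curl upload-and-install call against the AEM package manager; the bucket, package name, host, and credentials handling are placeholder assumptions, and urllib3 is assumed to be available in the Lambda runtime.

import boto3
import urllib3

s3 = boto3.client("s3")
http = urllib3.PoolManager()

def handler(event, context):
    # Fetch the content package produced by the build step (placeholder bucket/key).
    pkg = s3.get_object(Bucket="my-artifact-bucket", Key="we-retail/we.retail.all.zip")
    pkg_bytes = pkg["Body"].read()

    # Equivalent of:
    # curl -u admin:**** -F file=@pkg.zip -F name=pkg -F force=true -F install=true \
    #      http://author:4502/crx/packmgr/service.jsp
    resp = http.request(
        "POST",
        "http://author.example.com:4502/crx/packmgr/service.jsp",
        headers=urllib3.make_headers(basic_auth="admin:admin"),  # use a secrets store in practice
        fields={
            "file": ("we.retail.all.zip", pkg_bytes, "application/zip"),
            "name": "we.retail.all",
            "force": "true",
            "install": "true",
        },
    )
    return {"status": resp.status}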

Conclusion

With AWS CodePipeline and CodeBuild, we can now create a fully automated serverless deployment platform.

Jenkins is still a very powerful solution, and at this point it also has a large number of plugins that help automate various tasks. However, many of these can fairly easily be migrated to AWS CodePipeline. AWS CodePipeline already provides built-in support for many different services, including BlazeMeter and HPE StormRunner. It also supports Jenkins integration itself, and for any other services, AWS Lambda can fill the gap. For example, in our original standard AEM on AWS setups, a large number of tasks (deployment, content transfer, etc.) were performed by Jenkins invoking curl scripts, which can easily be implemented using AWS Lambda as shown in the example above.

Obviously, the example above is simplified. Additional steps for testing, content deployment or migration, and code review will need to be incorporated. I will describe some of this in a later post. That said, creating this simple unified architecture is very powerful.

In addition, by leveraging the CodePipeline and CodeBuild architecture, the deployment process itself can be moved into a CloudFormation template, making it possible to instantiate a system with multiple environments and automated deployments with little more than the click of a button, getting us closer to 100% automation.

written by: Martin Jacobs (GVP, Technology)

How Machine Learning Can Transform Content Management- Part II

In a previous post, I highlighted how machine learning can be applied to content management. In that article, I described how content analysis could identify the social, emotional, and language tone of an article. This gives a content author further insight into their writing style, and helps the author understand which style resonates most with their readers.

At the last AWS re:Invent conference, Amazon announced a new set of Artificial Intelligence Services, including natural language understanding (NLU), automatic speech recognition (ASR), visual search, image recognition and text-to-speech (TTS).

In previous posts, I discussed how the Amazon visual search and image recognition services could benefit digital asset management. This post will highlight the use of some of the other services in a content management scenario.

Text to Speech

From a content perspective, text-to-speech is a potentially interesting addition to a content management system. One reason could be to assist visitors with vision problems. Screen readers can already perform many of these tasks, but leveraging a text-to-speech service gives publishers more control over the quality and tone of the speech.

Another use case is around storytelling. One of the current trends is to create rich, immersive stories with extensive photography and visuals. This article from the New York Times describing the climb of El Capitan in Yosemite is a good example. With the new text-to-speech functionality, the article can easily be transformed into a self-playing presentation with voice, without additional manual effort such as recruiting voice talent and recording the narration.

Amazon Polly

Amazon Polly (the Amazon AI text-to-speech service) turns text into lifelike speech. It uses advanced deep learning technologies to synthesize speech that sounds like a human voice, and includes 47 lifelike voices spread across 24 languages. Polly can be used in real-time scenarios, and it can also be used to retrieve a standard audio file (such as MP3) that can be stored and used at a later point. The lack of restrictions on storage and reuse of voice output makes it a great option for use in a content management system.

Polly and Adobe AEM Content Fragments

In our proof of concept, the Amazon AI text to speech service was applied to Content Fragments in Adobe AEM.

Content Fragment

As shown above, Content Fragments allow you to create channel-neutral content with (possibly channel-specific) variations. You can then use these fragments when creating pages. When creating a page, the content fragment can be broken up into separate paragraphs, and additional assets can be added at each paragraph break.

After a Content Fragment is created, the solution passes its content to Amazon Polly to retrieve audio fragments with the spoken content. It generates a few files: one for the complete content fragment (main.mp3), and a set of files broken up by paragraph (main_1.mp3, main_2.mp3, main_3.mp3). The audio files are stored as associated content for the master Content Fragment.
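
A minimal sketch of that Polly call is shown below, splitting the fragment text on blank lines and synthesizing one MP3 per paragraph plus one for the whole fragment. The voice, file naming, and plain-text input are assumptions made for the example; the actual proof of concept stores the results as associated content in AEM rather than on disk.

import boto3

polly = boto3.client("polly")

def synthesize(text, filename, voice="Joanna"):  # assumed voice
    resp = polly.synthesize_speech(Text=text, OutputFormat="mp3", VoiceId=voice)
    with open(filename, "wb") as f:
        f.write(resp["AudioStream"].read())

def fragment_to_audio(fragment_text):
    # One file for the complete fragment...
    synthesize(fragment_text, "main.mp3")
    # ...and one file per paragraph (paragraphs assumed to be separated by blank lines).
    paragraphs = [p.strip() for p in fragment_text.split("\n\n") if p.strip()]
    for i, paragraph in enumerate(paragraphs, start=1):
        synthesize(paragraph, "main_%d.mp3" % i)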

Results

When authoring a page that uses this Content Fragment, the audio fragments are visible in the sidebar and can be added to the page if needed. With this capability in place, developing a custom storytelling AEM component to support scenarios like the New York Times article becomes relatively simple.

If you want to explore the code, it is posted on Github as part of the https://github.com/razorfish/contentintelligence repository.

Conclusion

This post highlighted how an old problem can now be addressed in a new way. The text-to-speech service will make creating immersive experiences easier and more cost effective. Amazon Polly’s support for 24 languages opens up new possibilities as well. Besides the two examples mentioned in this post, it could also support scenarios such as interactive kiosks, museum tour guides, and other IoT, multichannel, or mobile experiences.

With these turnkey machine-learning services in place, creative and innovative thinking is critical to successfully solve challenges in new ways.

written by: Martin Jacobs (GVP, Technology)

How Machine Learning Can Transform Digital Asset Management - SmartCrop

In previous articles I discussed the opportunities for machine learning in digital asset management (DAM), and, as a proof of concept, integrated a DAM solution (Adobe AEM DAM) with various AI/ML solutions from Amazon, IBM, Google, and Microsoft.

The primary use case for that proof of concept was auto-tagging assets in a digital asset management solution. Better metadata makes it easier for authors, editors, and other users of the DAM to search for assets, and in some scenarios, the DAM can provide asset recommendations to content authors based on metadata. For example, it’s often important to have a diverse mix of people portrayed on your site. With gender, age, and other metadata attributes as part of the image, diversity can be enforced using asset recommendations or asset usage reports within a DAM or content management system.

Besides object recognition, the various vendors also provide APIs for facial analysis. Amazon AI, for example, provides a face analysis API, and this post will show how we can tackle a different use case with that service.

SmartCrop

One common use case is the need for one image to be reused at different sizes. A good example is the need for a small profile picture of the CEO on a company overview page, as well as a larger version of that picture on the detailed bio page.

Challenge screenshot

Cropping at the center of an image often works fine, but it can also result in the wrong area being cropped, while simply resizing often distorts a picture or ends up incorporating many irrelevant areas. A number of solutions are out there to deal with this problem, ranging from open source tools to proprietary vendors. All of them leverage different detection algorithms to identify the area of interest in an image.

Results

Leveraging the Amazon Rekognition Face Analysis API, we can now solve this problem in a very simple way. Using the API, a bounding box for the face can be retrieved, indicating the boundaries of a face in that picture. With that bounding box, the right area for cropping can be identified. After cropping, any additional resizing can be done on the most relevant area of the image to bring it to the requested size.
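
The sketch below illustrates the idea with the Rekognition detect_faces call and a local crop. The margin, output size, and use of the Pillow imaging library are assumptions made for this example rather than details of the actual implementation.

import io
import boto3
from PIL import Image  # assumes Pillow is available

rekognition = boto3.client("rekognition")

def smart_crop(image_bytes, out_size=(200, 200), margin=0.2):
    faces = rekognition.detect_faces(Image={"Bytes": image_bytes})
    box = faces["FaceDetails"][0]["BoundingBox"]  # ratios relative to image dimensions

    img = Image.open(io.BytesIO(image_bytes))
    w, h = img.size

    # Convert the relative bounding box to pixels and add some margin around the face.
    left = int(max(0, (box["Left"] - margin * box["Width"]) * w))
    top = int(max(0, (box["Top"] - margin * box["Height"]) * h))
    right = int(min(w, (box["Left"] + (1 + margin) * box["Width"]) * w))
    bottom = int(min(h, (box["Top"] + (1 + margin) * box["Height"]) * h))

    return img.crop((left, top, right, bottom)).resize(out_size)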

Solution screenshot

The result is shown in the image above. The image on the right is the result of leveraging the SmartCrop functionality based on the Face Analysis API. As you can see, it is a significant improvement over the other options. This SmartCrop could be improved further by adding additional margin, or by incorporating some of the additional elements returned by the Face Analysis API.

The code for this proof of concept is posted on the Razorfish GitHub account, as part of the https://github.com/razorfish/contentintelligence repository. Obviously, in a real production scenario, additional optimizations should be made to this proof of concept for overall performance reasons. The Amazon Rekognition API call only needs to take place once per image, and can potentially be done as part of the same auto-tagging workflow highlighted in previous posts, with the bounding box stored as an attribute on the image for later retrieval by the SmartCrop functionality. In addition, the output of the cropping can be cached at a CDN or web server in front of Adobe AEM.

Conclusion

As this post highlights, old problems can now be addressed in new ways. In this case, a task that was often performed manually becomes something that can be automated. The availability of many turnkey machine-learning services provides a starting point for solving existing problems in new and very simple ways. It will be interesting to see the developments on this front in the coming year.

written by: Martin Jacobs (GVP, Technology)

How Machine Learning Can Transform Digital Asset Management - Part III

In previous articles I discussed the opportunities for machine learning in digital asset management (DAM), and, as a proof of concept, integrated a DAM solution (Adobe AEM DAM) with Google Cloud Vision. I followed up with a post on potential alternatives to Google’s Cloud Vision, including IBM Watson and Microsoft Cognitive Intelligence, and integrated those with Adobe DAM as well.

Of course, Amazon AWS couldn’t stay behind, and at the last AWS re:Invent conference, Amazon announced their set of Artificial Intelligence Services, including natural language understanding (NLU), automatic speech recognition (ASR), visual search, image recognition, and text-to-speech (TTS). Obviously, it was now time to integrate the AWS AI services into our proof of concept.

Amazon Rekognition

The first candidate for integration was Amazon Rekognition. Rekognition is a service that can detect objects, scenes, and faces in images, making it easy to add image analysis to your applications. At this point, it offers three core services:

  • Object and scene detection - automatically labels objects, concepts and scenes
  • Facial analysis - analysis of facial attributes (e.g. emotion, gender, glasses, face bounding box)
  • Face comparison - compare faces to see how closely they match

Integration Approach

Google’s API was integrated using an SDK, while the IBM and Microsoft APIs were integrated using their standard REST interfaces. For the Amazon Rekognition integration, the SDK route was taken again, leveraging the AWS Java SDK. Once the SDK has been added to the project, the actual implementation becomes fairly straightforward.

Functionality

From a digital asset management perspective, the previous posts focused on auto-tagging assets to support a content migration process or improve manual efforts performed by DAM users.

The object & scene detection for auto-tagging functioned well with Amazon Rekognition. However, the labels returned are generalized. For example, a picture of the Eiffel tower will be labeled “Tower” instead of recognizing the specific object.

The facial analysis API returns a broad set of attributes, including the location of facial landmarks such as mouth and nose. But it also includes attributes such as emotions and gender, which can be used as tags. These can then be beneficial in digital asset management scenarios such as search and targeting.

Many of the attributes and labels returned by the Rekognition API include a confidence score, indicating how confident the service is about a particular detection.

Results screenshot

In the proof of concept, a 75% cutoff was used. In the example above, you can see that Female, Smile, and Happy have been detected as facial attributes with higher than 75% confidence.
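
The proof of concept uses the AWS Java SDK; purely as an illustration of the same filtering logic, the sketch below shows equivalent calls in Python with a 75% confidence cutoff. The tag names derived from the response are my own choices for this example.

import boto3

rekognition = boto3.client("rekognition")
CUTOFF = 75.0

def tags_for_image(image_bytes):
    tags = []

    # Object & scene detection, filtered server-side by MinConfidence.
    labels = rekognition.detect_labels(Image={"Bytes": image_bytes}, MinConfidence=CUTOFF)
    tags += [label["Name"] for label in labels["Labels"]]

    # Facial analysis: keep only attributes above the cutoff.
    faces = rekognition.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])
    for face in faces["FaceDetails"]:
        if face["Gender"]["Confidence"] >= CUTOFF:
            tags.append(face["Gender"]["Value"])          # e.g. "Female"
        if face["Smile"]["Value"] and face["Smile"]["Confidence"] >= CUTOFF:
            tags.append("Smile")
        tags += [e["Type"].capitalize() for e in face["Emotions"] if e["Confidence"] >= CUTOFF]

    return tags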

Summary

The source code and setup instructions for the integration with AWS, as well as with the Google, Microsoft, and IBM solutions, can be found on GitHub in the Razorfish repository.

One thing all the different vendors have in common is that their services are very developer focused, and integrating them into an application is very straightforward. This makes adoption easy. Hopefully, the objects being recognized will become more detailed and specific over time, which will improve their applicability even further.

written by: Martin Jacobs (GVP, Technology)

AEM on AWS whitepaper

For a number of years now, I have been working on solutions that involve Adobe Experience Manager (AEM) and Amazon Web Services (AWS) to deliver digital experiences to consumers, partly through custom implementations for our clients, and sometimes through Fluent, our turnkey digital marketing platform that combines AEM, AWS and Razorfish Managed Services and DevOps.

In the past year, I had the pleasure of working with AWS on creating a whitepaper that outlines some of our ideas and best practices around deploying AEM on the AWS infrastructure.

The whitepaper delves into a few areas:

Why Use AEM on AWS?

Cloud-based solutions offer numerous advantages over on-premise infrastructure, particularly when it comes to flexibility.

For example, variations in traffic volume are a major challenge for content providers. After all, visitor levels can spike for special events such as Black Friday shopping, the Super Bowl, or other one-time occasions. AWS’s cloud-based flexible capacity enables you to scale workloads up or down as needed.

Marketers and businesses utilize AEM as the foundation of their digital marketing platforms. Running it on AWS facilitates easy integration with third-party solutions, making it a complete platform. Blogs, social media, and other auxiliary channels are simple to add and integrate. In particular, using AEM in conjunction with AWS allows you to combine the APIs from each into powerful custom configurations uniquely suited to your business needs. As a result, the tools for new features such as mobile delivery, analytics, and managing big data are at your fingertips.

A Look Under the Hood – Architecture, Features, and Services

In most cases, the AEM architecture involves three sets of services:

  1. Author – Used for the creation, management, and layout of the AEM experience.

  2. Publisher – Delivers the content experience to the intended audience. It includes the ability to personalize content and messaging to target audiences.

  3. Dispatcher – This is a caching and/or load-balancing tool that helps you realize a fast and performant experience for the end-user.

The whitepaper details some common configuration options for each service, and how to support them with the different storage options AEM provides. It also outlines some of the security considerations when deploying AEM in a certain configuration.

AWS Tools and Management

Not only does AWS allow you to build the right solution, it provides additional capabilities to make your environment more agile and secure.

Auditing and Security

The paper also highlights a couple of capabilities that make your platform more secure. For example, AWS has audit tools such as AWS Trusted Advisor. This tool automatically inspects your environment and makes recommendations that will help you cut costs, boost performance, improve reliability, and enhance security. Other recommended tools include Amazon Inspector, which scans for vulnerabilities and deviations from best practices.

Automation through APIs

AWS provides API access to all of its services, and AEM does the same. This allows for a very clean organization of the integration and deployment process. Used in conjunction with an open-source automation server like Jenkins, this lets you initiate manual, scheduled, or triggered deployments.

You can fully automate the deployment process. However, depending on which data storage options are used, separate arrangements may need to be made for backing up data, and policies and procedures for dealing with data loss and recovery need to be considered as well.

Additional AWS Services

There are numerous other services and capabilities you can leverage when you use AEM in conjunction with AWS.

One great service is Amazon CloudWatch, which allows you to monitor a variety of performance metrics from a single place.

In addition, the constant stream of new AWS offerings, such as Amazon’s Elastic File System (which allows you to configure file storage for your servers), provides new options for your platform.

Takeaway

Using Adobe’s Experience Manager in tandem with AWS provides a powerful platform for easily delivering highly immersive and engaging digital experiences. It is a solution that answers the dilemma that many marketers and content providers face – how to deliver an immersive, relevant, and seamless experience for end users from a unified platform, and the whitepaper aims to provide more insight into how to achieve this.

To learn more about running AEM on AWS, you can download and read the full whitepaper here.

written by: Martin Jacobs (GVP, Technology)

Lessons Learned from a Reactive Serverless CMS

Background

As mentioned in previous posts, we are big proponents of reactive architectures at Razorfish.

We also believe architectures using cloud functions — such as AWS Lambda — are part of the future of application development. In this post, we will call them “serverless” architectures because although there are obviously still servers involved, we’re not responsible for managing them anymore.

The relaunch of our technology blog provided the perfect opportunity to test this new architecture. In the paragraphs that follow, I’ll briefly discuss the architecture, followed by a summary of the lessons we learned.

Solution Summary

We architected the solution using Amazon AWS S3, Lambda, CloudFront, Hugo, and GitHub. It incorporates an authoring UI, as well as a mechanism for publishing. The diagram below shows some of the integration mechanisms. For the full technical details of the implementation, visit the earlier post on the Razorfish technology blog.

Learning — Serverless: Development Model

Obviously, development using AWS Lambda is quite different than your standard processes. But there’s good news: A large number of new tools are trying to address this, and we explored a few of them. Some of the most interesting include:

  • Lambda-local. This is a basic command line tool you can use to run Amazon Lambda functions on your local machine.
  • Node-Lambda. Similar to Lambda-local, this tool also provides support for deploying a function to AWS.
  • Apex. This large framework can be used to deploy lambda functions, potentially written in additional languages such as Go — which Apex itself is written in. The tool provides support for Terraform to manage AWS resources.
  • Kappa — Another tool for deployment of Lambda functions, using the AWS API for creation of resources.
  • Serverless. An application framework for building applications using AWS Lambda and API Gateway. It tries to streamline the development of a microservices-based application. It creates AWS resources using CloudFormation templates, giving you the benefits of tracking and managing resource creation. It also supports different types of plugins, allowing you to quickly add additional capabilities to an application (e.g., logging). One of the objectives of the tool is to support multiple cloud providers, including Google and Azure Cloud Functions.
  • λ Gordon — Similar to Apex, a solution to create and deploy lambda functions, using CloudFormation to manage these resources, with a solid set of example functions.
  • Zappa. Zappa allows you to deploy Python WSGI applications on AWS Lambda + API Gateway. Django and Flask are examples of WSGI applications that can now be deployed on AWS Lambda using Flask-Zappa or Django-Zappa.

In addition to these tools, IDEs have added features that make it easier to create and deploy Lambda functions. For example, Visual Studio and Eclipse have tools to make it easier to create, test, and deploy functions.

Lambda-local was the tool of choice for the serverless CMS application created for our blog. Its simplicity is helpful, and one of the unique challenges we faced was the support needed for binaries like Hugo and libgit2, which required development both on local machines and on an Amazon EC2 Linux instance.

Learning — Serverless: Execution Model

Although the initial use cases for AWS Lambda and other similar solutions have been styled around executing backend tasks like image resizing, interactive web applications can become an option as well.

For a start, many solutions don’t necessarily need to be a server-side web application, and can often be architected as a static site using client-side JavaScript for dynamic functionality. In the AWS scenario, this means a site hosted on S3 and CloudFront that then integrates with AWS Lambda using the JavaScript SDK or the API Gateway — similar to how this was done for the Razorfish blog.

But in case the dynamic element is more complex, there is a great potential for full-featured frameworks like Zappa that allow you to develop interactive web applications that can run on AWS Lambda using common frameworks such as Django and Flask. In my opinion, this is also where AWS can get significant competition from Azure Functions, as Microsoft has an opportunity to create very powerful tools with their Visual Studio solution.

Overall, AWS Lambda is a great fit for many types of applications. The tool significantly simplifies the management of applications; there’s limited need to perform ongoing server monitoring and management that is required with AWS EC2 or AWS Elastic Beanstalk.

On top of that, Lambda is incredibly affordable. As an example, if you required 128MB of memory for your function and executed it 30 million times in one month for 200ms each time, your monthly bill would be $11.63 — cheaper than running most EC2 instances.
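
The arithmetic behind that number, using the Lambda pricing in effect at the time (roughly $0.20 per million requests and $0.00001667 per GB-second, with a monthly free tier of 1 million requests and 400,000 GB-seconds), works out as sketched below.

# Worked example of the $11.63 estimate (prices and free tier as assumed above).
requests = 30_000_000
duration_s = 0.2
memory_gb = 128 / 1024.0

gb_seconds = requests * duration_s * memory_gb            # 750,000 GB-s
compute_cost = (gb_seconds - 400_000) * 0.00001667        # ~$5.83 after free tier
request_cost = (requests - 1_000_000) / 1_000_000 * 0.20  # ~$5.80 after free tier

print(round(compute_cost + request_cost, 2))              # ~11.63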

The Razorfish technology blog architecture is network intensive. It retrieves and uploads content from S3 or Github. With AWS Lambda, you choose the amount of memory you want to allocate to your functions and AWS Lambda allocates proportional CPU power, network bandwidth, and disk I/O. So in this case, an increase in memory was needed to ensure enough bandwidth for the Lambda functions to execute in time.

Learning — Reactive flows

The second goal of the creation of our blog was to apply a reactive architecture. A refresher: Reactive programming is a programming style oriented around data flows and the propagation of change. Its primary style is asynchronous message passing between components of the solution, which ensures loose coupling.

In the blog scenario, this was primarily achieved by using S3 events, Github hooks, and SNS message passing. Some examples:

  • When one Lambda function finished, an SNS message was published for another Lambda function to execute (see the sketch after this list).
  • Client-side content updates are posted to S3, and the resulting S3 event triggers Lambda functions.
  • A GitHub update posts to SNS, and the SNS message triggers a Lambda function.
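
As a minimal, illustrative sketch of that first pattern (with a placeholder topic ARN): one function publishes an SNS message when it finishes, and the next function is subscribed to the topic and reads the message from the SNS event.

import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:site-generated"  # placeholder

def generate_site(event, context):
    # ... do the work (e.g. run Hugo), then notify the next step ...
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"status": "generated"}))

def publish_site(event, context):
    # Triggered by the SNS subscription; the message is nested in the SNS event record.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    print("received:", message["status"])
    # ... sync the generated site to the live S3 bucket ...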

Overall, this allowed for a very simple architecture. It also makes it very straightforward to test and validate parts of the solution in isolation.

One of the key challenges, however, is that there are scenarios where it becomes difficult to keep track of all the different events and resulting messages generated, which can potentially result in loops or cascading effects.

The Developer’s Takeaway

Overall, I believe the architectural style of reactive and serverless has a lot of promise — and may even be transformational with respect to developing applications in the future. The benefits are tremendous, and will allow us to really take cloud architectures to the next level. For this reason alone, developers should consider how this approach could be incorporated into every new project.

written by: Martin Jacobs (GVP, Technology)

A Reactive Serverless CMS for the Technology Blog

Background

At Razorfish, we are big proponents of reactive architectures. Additionally, we believe architectures using cloud functions such as AWS Lambda are part of the future of application development. Our relaunch of the blog was a good occasion to test this out.

Historically, the blog had been hosted on WordPress. Although WordPress is a good solution, we had run into some performance issues. While there are many good ways to address performance challenges in WordPress, it was a good time to explore a new architecture for the blog, as we weren’t utilizing any WordPress-specific features.

We had used static site generators for a while for other initiatives, and looked at these types of solutions to create the new site. We wanted to avoid any running servers, either locally or in the cloud.

Our technology stack ended up as follows:

  • Github – Contains two repositories: a content repository with Hugo-based themes, layout and content, and a code repository with all the CMS code.

  • AWS Lambda

  • Hugo – Site Generator written in the Go Programming Language

  • AWS S3 – Source and generated sites are stored on S3

  • AWS CloudFront – CDN for delivery of site.

Why Hugo?

There are a large number of site generators available, ranging from Jekyll to Middleman. We explored many of them, and decided on Hugo for a couple of reasons:

  • Speed – Hugo generation happens in seconds

  • Simplicity - Hugo is very easy to install and run. It is a single executable, without any dependencies

  • Structure - Hugo has a flexible structure, allowing you to go beyond blogs.

Architecture

The architecture is outlined below. A number of Lambda functions are responsible for executing the different functions of the CMS. Some of the use of Hugo was derived from http://bezdelev.com/post/hugo-aws-lambda-static-website/. The authentication function was loosely derived from https://github.com/danilop/LambdAuth.

The solution uses AWS Lambda’s ability to run arbitrary executables. This is used for invoking Hugo, but also for incorporating libgit2, which allows us to execute git commands and integrate with GitHub.
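
The actual functions were written for the Node.js runtime; purely to illustrate the pattern of running a bundled binary inside Lambda, here is a minimal Python sketch. It assumes a Linux-compiled hugo binary is shipped in the deployment package, and the source and destination paths are placeholders.

import os
import shutil
import subprocess

def handler(event, context):
    # Lambda's filesystem is read-only except /tmp, so copy the bundled binary
    # there and make it executable before invoking it.
    hugo = "/tmp/hugo"
    if not os.path.exists(hugo):
        shutil.copy(os.path.join(os.getcwd(), "bin/hugo"), hugo)  # assumed packaging layout
        os.chmod(hugo, 0o755)

    subprocess.run(
        [hugo, "--source", "/tmp/site-src", "--destination", "/tmp/site-out"],
        check=True,
    )
    # ... upload /tmp/site-out to the target S3 bucket ...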

CMS

As part of the solution, a CMS UI was developed to manage content. It allows the author to create new content, upload new assets, and make other changes.

Content is expected to be in Markdown format, but this is simplified for authors with the help of the Hallo.js editor.

Preview is supported with different breakpoints, including a mobile view.

As it was developed as a reactive architecture, other ways to update content are available:

  • Through a commit on GitHub, potentially using GitHub’s Markdown editor.

  • Upload or edit markdown files directly on S3

Challenges

As the solution was architected, a few interesting challenges had to be addressed.

  1. At the time of development, only Node 0.10 was supported on AWS Lambda. To utilize solutions like libgit2, a more recent version of Node was needed. To work around this, a newer Node executable was packaged as part of the deployment, and the Node 0.10 runtime spawned the more recent Node version.

  2. Only the actual site should be publicly accessible. CloudFront signed cookies provided a mechanism to prevent preview and other environments from being directly accessible.

  3. Hugo and libgit2 need to be compiled for the AWS Lambda Linux environment, which can be a challenge when all other development occurs on Windows or Macs.

Architecture benefits

The reactive architecture approach makes it really easy to enhance and extend the solution with other options for integrating content or adding experience features.

For example, as an alternative to the described content editing solutions above, other options can be identified:

  • A headless CMS like Contentful could be added for a richer authoring UI experience.

  • By using SES and Lambda to receive and process email, an email-based content creation flow could be set up.

  • A converter like pandoc running on AWS Lambda could be incorporated into the authoring flow, for example to convert source documents to the target Markdown format. It could be invoked from the CMS UI, or from the email processor.

From an end-user experience perspective, Disqus or other third-party providers are obvious choices for incorporating comments. However, the Lambda architecture can also be an option for easily adding commenting functionality.

Conclusion

Although more and more tools are becoming available, AWS Lambda development and code management can still be a challenge, especially in our scenario with OS-specific executables. However, from an architecture perspective, the solution is working very well. It has become very predictable and stable, and allows for a fully hands-off approach to management.

written by: Martin Jacobs (GVP, Technology)

How Machine Learning Can Transform Content Management

In previous posts, I explored the opportunities for machine learning in digital asset management, and, as a proof-of-concept, integrated a DAM solution (Adobe AEM DAM) with a set of machine learning APIs.

But the scope of machine learning extends much further. Machine learning can also have a profoundly positive impact on content management.

In this context, machine learning is usually associated with content delivery. The technology can be used to deliver personalized or targeted content to website visitors and other content consumers. Although this is important, I believe there is another opportunity that stems from incorporating machine learning into the content creation process.

Questions Machine Learning Can Answer

During the content creation process, content can be analyzed by machine learning algorithms to help address some key questions:

  • How does content come across to readers? Which tones resonate the most? What writer is successful with which tone? Tone analysis can help answer that.
  • What topics are covered most frequently? How much duplication exists? Text clustering can help you analyze your overall content repository.
  • What is the article about? Summarization can extract relevant points and topics from a piece of text, potentially helping you create headlines.
  • What are the key topics covered in this article? You can use automatic topic extraction and text classification to create metadata for the article to make it more linkable and findable (both within the content management tool internally and via search engines externally).

Now that we know how machine learning can transform the content creation process, let’s take a look at a specific example of the technology in action.

An Implementation with AEM 6.2 Content Fragments and IBM Watson

Adobe just released a new version of Experience Manager, which I discussed in a previous post. One of the more important features of AEM 6.2 is the concept of “Content Fragments.”

In the past, content was often tied to pages. But Content Fragments allow you to create channel-neutral content with (possibly channel-specific) variations. You can then use these fragments when creating pages.

Content Fragments are treated as assets, which makes them great candidates for applying analysis and classification. Using machine learning, we’re able to analyze the tone of each piece of content. Tones can then be associated with specific pieces of content.

In the implementation, we used IBM Bluemix APIs to perform tone analysis. The Tone Analyzer computes emotional tone (joy, fear, sadness, disgust or anger), social tone (openness, conscientiousness, extraversion, agreeableness or emotional range) and language tone (analytical, confident or tentative).

The Tone Analyzer also provides insight into how the content comes across to readers. For each sub-tone, it provides a score between 0 and 1. In our implementation, we associated a sub-tone with the metadata only if the score was 0.75 or higher.
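
As an illustration of that thresholding step, the sketch below calls the Tone Analyzer REST endpoint and keeps only sub-tones scoring 0.75 or higher. The endpoint URL, version date, credential handling, and response shape reflect the Bluemix service as it existed around that time and should be treated as assumptions; the requests library is also assumed to be available.

import requests  # assumed dependency

TONE_URL = "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone"

def tone_tags(text, username, password, threshold=0.75):
    resp = requests.post(
        TONE_URL,
        params={"version": "2016-05-19"},           # assumed API version date
        auth=(username, password),                  # Bluemix service credentials
        json={"text": text},
    )
    resp.raise_for_status()

    tags = []
    # Walk the emotion, language and social tone categories and keep strong sub-tones.
    for category in resp.json()["document_tone"]["tone_categories"]:
        for tone in category["tones"]:
            if tone["score"] >= threshold:
                tags.append(tone["tone_name"])      # e.g. "Joy", "Analytical"
    return tags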

The Results of Our Implementation

If you want to take a look, you’ll find the source code and setup instructions for the integration of Content Fragments with IBM Watson Bluemix over on GitHub.

We ran our implementation against the text of Steve Jobs’ 2005 Stanford commencement speech, and the results are shown below. For every sub-tone with a score of 0.75 or higher, a metadata tag was added.

Results

The Takeaway from Our Implementation

Machine learning provides a lot of opportunities during the content creation process. But it also requires authoring and editing tools that seamlessly integrate with these capabilities to really drive adoption and make the use of those machine-learning insights a common practice.

Personally, I can’t wait to analyze my own posts from the last few months using this technology. That analysis, combined with LinkedIn analytics, will enable me to explore how I can improve my own writing and make it more effective.

Isn’t that what every content creator wants?

written by: Martin Jacobs (GVP, Technology)

How Machine Learning Can Transform Digital Asset Management - Part II

A few weeks ago, I discussed the opportunities for machine learning in digital asset management (DAM), and, as a proof of concept, integrated a DAM solution (Adobe AEM DAM) with Google Cloud Vision, a newly released set of APIs for image recognition and classification.

Now, let’s explore some alternatives to Cloud Vision.

IBM Watson

To follow up, we integrated IBM’s offering. As part of Bluemix, IBM actually has two sets of APIs: the AlchemyAPI (acquired in March 2015) and the Visual Recognition API. The key difference between the two is the ability to train your own custom classifier with the Visual Recognition API.

There are a number of APIs within the AlchemyAPI, including a Face Detection/Recognition API and an Image Tagging API. The Face API includes celebrity detection and disambiguation of a particular celebrity (e.g., which Jason Alexander?).

Result sets can provide an age range for the identified people in an image. In a DAM scenario, getting a range instead of a specific number can be particularly helpful. The ability to create your own custom classifier could be very valuable for creating accurate results in your specific domain. For example, you could create a trained model for your products, and organize your brand assets automatically against it. This would enable you to further analyze the usage and impact of these assets across new and different dimensions.

Leveraging the APIs was fairly straightforward. Similar to Google, adding the API to an application is simple; just build upon the sample API provided by IBM. It’s worth noting, however, that IBM’s API has a 1MB image size restriction, somewhat lower than Google’s and Microsoft’s 4MB limits.

Microsoft Cognitive Services

Microsoft is interesting, especially considering they won the most recent ImageNet Large Scale Visual Recognition Challenge. As part of its Cognitive Services offering, Microsoft released a set of applicable Vision APIs (though they’re still in preview mode). For our purposes, the most relevant APIs are:

  • Computer Vision: This API incorporates an ability to analyze images and derive the appropriate tags with their confidence score. It can detect adult and racy content, and similar to Google’s Cloud Vision API, it has an Optical Character Recognition (OCR) capability that reads text in images. Besides tags, the API can provide English language descriptions of an image — written in complete sentences. It also supports the concepts of models. The first model is celebrity recognition, although we couldn’t get that one to work for straightforward celebrities like Barack Obama and Lionel Messi (it also doesn’t seem to work on the landing page).
  • Emotion: This API uses a facial expression in an image as an input. It returns the confidence level across a set of emotions for each face in relevant images.
  • Face: This API is particularly interesting, as it allows you to perform face recognition within a self-defined group. In a DAM scenario, this could be very relevant. For example, when all product images are shot with a small set of models, it can easily and more accurately classify each image with respect to various models. If an organization has contracts with a small set of celebrities for advertising prints, classification becomes that much more accurate.

The Microsoft APIs are dependent on each other in certain scenarios. For example, the Emotion API leverages the Face API to first identify faces within an image. Similarly, the Computer Vision API and Face API both identify gender and other attributes of people within an image.

Although Microsoft didn’t provide a sample Java API, the REST API is easy to incorporate. The source code and setup instructions for the integration with Google, Microsoft, and IBM’s solutions can be found on GitHub.
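
As an illustration of that REST integration, the sketch below posts an image to the Computer Vision analyze endpoint and pulls out tags and a caption. The host name, API version, and response fields match the preview-era service as I recall it and may have changed since, so treat them as assumptions; the subscription key is a placeholder and the requests library is assumed.

import requests  # assumed dependency

ANALYZE_URL = "https://api.projectoxford.ai/vision/v1.0/analyze"  # preview-era host

def analyze_image(image_bytes, subscription_key):
    resp = requests.post(
        ANALYZE_URL,
        params={"visualFeatures": "Tags,Description,Faces"},
        headers={
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    resp.raise_for_status()
    result = resp.json()

    tags = [t["name"] for t in result.get("tags", []) if t.get("confidence", 0) >= 0.75]
    captions = [c["text"] for c in result.get("description", {}).get("captions", [])]
    return tags, captions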

Adobe Smart Tags

At the recent Adobe Summit conference, Adobe also announced the use of machine intelligence for smart tagging of assets as a beta capability of their new AEM 6.2 release. According to Adobe, it can automatically tag images with keywords based on:

  • Photo type (macro, portrait, etc.)
  • Popular activities (running, skiing, hiking, etc.)
  • Certain emotions (smiling, crying, etc.)
  • Popular objects (cars, roads, people, etc.)
  • Animals (dogs, cats, bears, etc.)
  • Popular locations (New York City, Paris, San Francisco, etc.)
  • Primary colors (red, blue, green)

There are even more categories for automatic classification, too.

Automatic Tagging Use Cases

In the previous post, I highlighted a couple of key use cases for tagging using machine intelligence in DAM. In particular, I highlighted how tagging can support the content migration process or improve manual efforts performed by DAM users.

Better metadata makes it easier for authors, editors, and other users of the DAM to find content during the creation process. It can also help in providing asset recommendations to content authors. For example, it’s often important to have a diverse mix of people portrayed on your site. With gender, age, and other metadata attributes as part of the image, diversity can be enforced using asset recommendation or asset usage reports within a DAM or content management system.

What’s more, this metadata can also help improve targeting and effectiveness of the actual end-user experience by:

  1. Allowing the image to be selected as targeted content
  2. Using the metadata in an image to ensure relevant ads, content, and assets are presented in context within an asset
  3. Informing site analytics by incorporating image metadata in click tracking and other measurement tools

In addition to these use cases, new scenarios are being created. Microsoft automatically generates captions for your photos. Facebook is using machine intelligence to automatically assign alt text to photos uploaded to Facebook, and, in doing so, improve overall accessibility for Facebook users. Obviously, this type of functionality will also enable Facebook and Microsoft to provide more targeted content and ads to users interacting with specific photos, a win-win. As metadata is used for end-user consumption in these cases, the need to support multilingual tagging and descriptions arises, which brings its own set of challenges.

With companies like Adobe, IBM, Google, and Microsoft pouring a ton of resources into machine learning, expect a lot of changes and improvements in the coming years. Relatively soon, computers will outperform humans in classification and analysis.

As it relates to Digital Asset Management, it remains to be seen precisely what the exact improvements will be. But one thing is certain: Machine learning technology promises a lot of exciting possibilities.

written by: Martin Jacobs (GVP, Technology)

How Machine Learning Can Transform Digital Asset Management

As the use of and need for digital assets increase, so too do the cost and complexity of digital asset management (DAM) — especially in a world where people are adopting devices with screens of all sizes (e.g., desktop, mobile, tablet, etc.).

DAM, however, is a challenge for many organizations. It still involves frequent manual labor, but machine learning is starting to change that.

Machine learning has already given us self-driving cars, speech recognition, effective web searches, and many other benefits over the past decade. But the technology can also play a role in classifying, categorizing, and managing assets in the years to come.

Machine learning can support DAM in areas such as face recognition, image classification, text detection, people recognition, and color analysis, among others. Google PlaNet, for example, can figure out where a photo was taken based on details embedded in it. Google Photos is using machine learning to improve the search experience, and machine learning has already taken on a role in image spam detection. Taken together, this all points to the need for DAM tools to start incorporating advanced machine-learning capabilities.

A Practical Test

Recently, Google released its Cloud Vision API. The Google Cloud Vision API enables developers to understand the content of an image by encapsulating powerful machine-learning models in an easy-to-use REST API. It quickly classifies images into thousands of categories (e.g., “sailboat”, “lion”, “Eiffel Tower”, etc.), detects individual objects and faces within images, and finds and reads printed words contained within images.

For Razorfish, this was a good reason to explore using the Vision API together with a DAM solution, Adobe AEM DAM. The result of the integration can be found on GitHub.

Results screenshot

We leveraged the text detection capabilities, automated classification techniques, and landmark detection functionality within Google’s API to automatically tag and assign other metadata to assets.
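
A minimal sketch of that combined request against the Vision REST API is shown below; the API key handling and the maxResults values are placeholder assumptions made for the example.

import base64
import requests  # assumed dependency

VISION_URL = "https://vision.googleapis.com/v1/images:annotate"

def annotate(image_bytes, api_key):
    body = {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 10},    # auto-classification
                {"type": "LANDMARK_DETECTION", "maxResults": 5},  # landmark tags
                {"type": "TEXT_DETECTION"},                       # text in the image
            ],
        }]
    }
    resp = requests.post(VISION_URL, params={"key": api_key}, json=body)
    resp.raise_for_status()
    result = resp.json()["responses"][0]

    labels = [l["description"] for l in result.get("labelAnnotations", [])]
    landmarks = [l["description"] for l in result.get("landmarkAnnotations", [])]
    text = [t["description"] for t in result.get("textAnnotations", [])[:1]]  # full text block
    return labels + landmarks + text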

Benefits and Setbacks

Integrating the Vision API provided immediate benefits:

  • Automated text detection can help in extracting text from images, making them easily accessible through search.
  • Automated landmark detection helps in ensuring that the appropriate tags are set on digital assets.
  • Auto-classification can support browse scenarios for finding the right assets.

But there were also some shortcomings. For example, an image of a businesswoman in a white dress was identified as a bride. In other instances, the labels were vague or irrelevant. Though inconvenient, we expect these shortcomings to improve over time as the API improves.

Even with these drawbacks unaddressed, automated detection is still very valuable — particularly in a DAM scenario. Assigning metadata and tags to assets is usually a challenge, and automated tagging can address that. And since tags are used primarily in the authoring environment, false classifications can be manually ignored, while appropriate classifications help surface assets much more broadly.

The Evolution of DAM Systems

One frequent pain point in implementing DAM systems is asset migration. I have seen many clients with gigabytes of assets wonder whether to go through the tremendous effort of manually assigning metadata to them.

There’s a quick fix: Auto-classification techniques using machine learning will improve and speed up this process tremendously.

With the benefits around management and migration, machine learning and other intelligence tools will therefore start becoming a key component of DAM systems — similar to how machine learning is already impacting other areas.

Lastly, incorporating machine learning capabilities in DAM solutions will also have architectural implications. Machine intelligence functionality often uses a services-based architecture (similar to the APIs provided by Google), as it requires a significant or complex set of compute resources. As DAM systems start to incorporate these capabilities at their core, it will be more difficult for those solutions to support a classic on-premises approach — causing more and more solutions to migrate to a hosted software-as-a-service (SaaS) model.

Bottom line? Consider incorporating machine learning into your DAM strategy now, and look at how it can be applied to your digital asset management process.

written by: Martin Jacobs (GVP, Technology)

CMIS - will it revolutionize the CMS industry?

Last year three major CMS/ECM vendors (EMC, IBM and Microsoft) came together to propose new standards that will change the CMS landscape the same way SQL-92 did for the database industry. The Content Management Interoperability Services (CMIS) standards cover services that allow interoperability between content stores. These standards cover the three basic areas of content management systems:

  • CMS basic operations - CRUD (Create, Retrieve, Update and Delete) services, versioning and workflow

  • Content discovery - query services and search, including a SQL-like query

  • Domain model – object types, folder hierarchy, document, relationship and access rules

Previous interoperability proposals such as JSR 170/283 have not gained traction because they were purely Java based and too function rich, forcing vendors to make substantial investments with little or no market-driven need. Another standard, WebDAV, was too simple and relied solely on the HTTP protocol; it had no concept of content types or content relationships.

CMIS supports both a SOAP-based interface and a REST-based interface, the latter being much easier to implement. Last month EMC, Microsoft, IBM and Alfresco were able to implement a draft of CMIS and test it against SharePoint, Documentum, FileNet and Alfresco.

The proposed CMIS query supports SQL-like terms and clauses such as SELECT, FROM, WHERE and CONNECT BY. A query can include terms and clauses based on content metadata and properties such as size, date, etc. Example query:

SELECT * FROM DOCUMENT WHERE ((CONTENT_STREAM_MIMETYPE = ‘MSWORD’) AND (CONTAINS ‘Razorfish’))

This new draft CMIS standard creates a clear firewall between applications and content stores. It will cut application development and integration costs, and eliminate time spent learning vendor-specific content access APIs. Imagine being able to design an application that can access and manipulate content from any content store, and then change the underlying content store by merely changing an entry in a property file. For the vendors the outlook may be murky initially; it is possible that the number of competing CMS/ECM products may shrink. Nevertheless, the market penetration of CMS products will increase dramatically, and CMS/ECM may become as ubiquitous as databases. Microsoft’s involvement brings up the possibility that all MS Office products may support direct check-in/check-out from CMIS-based repositories.

The CMIS draft was submitted to the standards body OASIS for public comment in September ’08, and is expected to be approved by the middle of 2009. The draft is also being backed by Oracle and SAP.

Resources

CMIS charter

The draft may be accessed here as a zip.

Ready for Web 3.0/Semantic Web?

When mainstream media starts talking about the Semantic Web, one can infer that it is not just another buzzword within research labs. Recently, The Economist and BBC Online covered this topic. Earlier this month, Thomson Reuters announced a service that will help with semantic markup.

SemanticWeb Primer

The term Semantic Web was first used by Sir Tim Berners-Lee, the inventor of the World Wide Web, who envisioned a web in which the “day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines”. The most significant aspect of the semantic web is the ability of machines to understand and derive semantic meaning from web content. The term Web 3.0 was introduced in 2006 to describe a next generation web with an emphasis on semantic web technologies. Though the exact meaning and functionality of Web 3.0 is vague, most experts agree that we can expect Web 3.0 in some form starting in 2010.

There are two approaches to extracting semantic knowledge from web content. The first involves extensive natural language processing of the content, while the second places the burden on content publishers to annotate or mark up the content. This marked-up content can then be processed by search engines, browsers or intelligent agents. The markup approach overcomes the shortcomings of natural language processing, which tends to be non-deterministic; furthermore, determining meaning depends not only on the written text, but also on information that is not captured in the written text. For instance, an identical statement by Jay Leno or by Secretary Hank Paulson may have a totally different meaning.

The ultimate goal of Web 3.0, to provide intelligent agents that can understand web content, is still a few years away. Meanwhile, we can start capturing information and building constructs in our web pages that allow search engines and browsers to extract context and data from content. There are multiple ways of doing semantic markup of web content that are understood by browsers and search engines.

Semantic Search Engines

On September 22, 2008, Yahoo announced that it will be extracting RDFa data from web pages. This is a major step in improving the quality of search results. Powerset (recently acquired by Microsoft) initially allows semantic searches on content from wikipedia.org, which is fairly structured content. Hakia, on the other hand, uses a different approach: it processes unstructured web content to gather semantic knowledge. This approach is language based and dependent on grammar.

Semantic markups - RDFa and microformats

The W3C consortium has authored specifications for annotation using RDF, an XML-based standard that formalizes all relationships between entities using triples. A triple is a notation involving a subject, a predicate and an object; for example, in “Paris is the capital of France”, the subject is Paris, the predicate is “is the capital of”, and the object is France. RDFa is an extension to XHTML to support semantic markup that allows RDF triples to be extracted from web content.

Microformats are simpler markups using XHTML and HTML tags which can be easily embedded in web content. Many popular sites have already started using microformats. Flickr uses geo for tagging photo locations, and hCard and XFN for user profiles. LinkedIn uses hCard, hResume and XFN on user contacts.

Microformat hCard example in HTML, and the resulting output on the browser page:

<div class="vcard">
  <span class="fn">Atul Kedar</span>
  <div class="org">Avenue A | Razorfish</div>
  <div class="adr">
    <div class="street-address">1440 Broadway</div>
    <span class="locality">New York</span>, <span class="region">NY</span> <span class="country-name">USA</span>
  </div>
</div>

Rendered in the browser, this simply reads: Atul Kedar, Avenue A | Razorfish, 1440 Broadway, New York, NY, USA.

Microformat hCalendar entry example with browser view:

October 16th : September 18th, 2008 Web 3.0 at Sunnyvale, CA


As you notice from the above examples, microformats can be added to existing content and are interpreted correctly by browsers. There are many more entities that can be semantically tagged, such as places, people and organizations. Some web browser enhancements (in Firefox, for example) recognize these microformats and allow you to add them directly to your calendar or contacts with a single click.

Automated Semantic markup services and tools

Another interesting development is in the area of automatic entity extraction from content; annotation applications and web services are being developed for this purpose. Thomson Reuters is now offering a professional service, OpenCalais, to annotate content, and Powerset is working towards similar offerings. These services reduce the need for content authors to painfully go through the content and manually tag all relationships. Unfortunately, these services are not perfect and need manual cross-checking and edits. Other similar annotation services or tools are Zemanta, SemanticHacker and TextWise.

Next Steps

As Web 3.0 starts to take shape, it will initially affect the front-end designers involved with the web presentation layer, as organizations demand more semantic markup within the content. In due course, CMS architects will have to update the design of data entry forms and entity information records in a manner that facilitates semantic markup and removes any duplication of entity data or entity relationships. Entity data such as author information, people information, addresses, event details, location data, and media licensing details are perfect candidates for new granular storage schemes and data entry forms.