Cloud Computing Archive

Highlights from AWS re:Invent 2016

Another AWS re:Invent is behind us and it was packed with exciting announcements, including the launch of new products, extensions of existing services, and much more. It was the biggest re:Invent ever, with a whopping 32,000 attendees and numerous exhibitors.

The conference was kicked off with a keynote from Andy Jassy, CEO of Amazon Web Services, who presented some impressive growth numbers and announced a host of new updates to the AWS portfolio of services. The biggest announcements were around new Artificial Intelligence (AI) services called Lex, Rekognition, and Polly, and the data migration appliances Snowmobile and Snowball Edge. He also launched Amazon Lightsail, which allows developers to set up a virtual private server (VPS) with just a few clicks.

The second keynote, presented by Amazon Web Services CTO Werner Vogels, was more focused on new development tools, Big Data, Security and Mobile services.

Here’s a rundown of the key announcements coming out of re:Invent this year.  

Amazon AI

One of the most significant announcements from Andy Jassy’s keynote was the launch of Amazon Lex, Amazon’s first AI service. Amazon Lex is a service for building conversational interfaces into any application using voice and text. It’s the technology at the heart of the Amazon Alexa platform. This chatbot-friendly service is in preview.

Another AI service launched was Amazon Rekognition. Rekognition allows developers to add image analysis to applications. It can analyze and detect facial features and objects such as cars and furniture. Jassy also announced the launch of Amazon Polly, which converts text into speech. Polly is a fully managed service, and you can even cache responses, making it cost-efficient. It is available in 47 voices and 27 languages.
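Like other AWS services, Polly is reachable from the standard SDKs. Here is a minimal sketch of what synthesizing speech could look like with boto3; the voice and output file name are illustrative assumptions, not a prescribed setup.

```python
import boto3

# A minimal sketch of calling Amazon Polly via boto3; voice and file name are illustrative.
polly = boto3.client("polly")

response = polly.synthesize_speech(
    Text="Welcome to re:Invent 2016.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The synthesized audio comes back as a stream that can be written to a file (and cached).
with open("welcome.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())
```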

Internet of Things (IoT)

AWS Greengrass is another interesting service launched at re:Invent. AWS Greengrass lets you run local compute, messaging & data caching for connected devices in a secure way. Greengrass seamlessly extends AWS to devices so they can act locally on the data they generate, while still using the cloud for management, analytics, and durable storage. It allows IoT devices to respond quickly to local events, operate with intermittent connections, and minimize the cost of transmitting IoT data to the cloud.

Data storage and services

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon’s Simple Storage Service (S3) using SQL. It is a great addition since it allows developers to use standard SQL syntax to query data that’s stored in S3 without setting up the infrastructure for it. This service works with CSV, JSON, log files, delimited files, and more.
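Beyond the console, queries can also be scripted. A rough sketch of submitting a query with boto3 might look like the following; the database, table, and output bucket are hypothetical.

```python
import boto3

# A hedged sketch of running a SQL query against data in S3 with Athena.
# The database, table, and S3 output location below are placeholders.
athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

# Execution is asynchronous; use the returned id to poll for results.
print(response["QueryExecutionId"])
```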

Amazon Aurora, AWS’s cloud-based relational database, now supports PostgreSQL in addition to its existing MySQL compatibility.

Serverless

AWS Lambda, Amazon’s serverless computing service, got a couple of updates as well. Amazon announced Lambda@Edge, a new Lambda-based processing model that allows you to write code that runs within AWS edge locations. This lightweight request processing logic handles requests and responses that flow through a CloudFront distribution. It is great for developers who need to automate simple tasks in their CDN deployment so that traffic does not have to be routed back to a server.

Lambda functions now include support for Microsoft’s C# programming language, joining the existing support for Node.js, Python, and Java. Amazon also unveiled AWS Step Functions as a way to create a visual state machine workflow out of your functions.
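For context, here is a minimal sketch of what a Lambda function body looks like in Python, one of the already supported runtimes; the event fields used are illustrative assumptions rather than a specific trigger format.

```python
import json


def handler(event, context):
    # Lambda passes the triggering event (e.g. an API Gateway request) as a dict.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```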

Compute

As is tradition at re:Invent, Amazon announced a series of new core computing capabilities for its cloud. It launched F1 instances that support programmable hardware, R4 memory-optimized instances, T2 burstable-performance instances, compute-optimized C5 instances, and I/O-intensive I3 instances. Andy Jassy also announced Amazon EC2 Elastic GPUs, which let you attach low-cost graphics acceleration to current-generation EC2 instances.

Another important compute service launched is Amazon Lightsail. It allows developers to launch a virtual private server with just a few clicks. I think it is a great addition to the portfolio, as it allows small business owners and bloggers to host their websites on AWS.

Migration / Data Transfer

Expanding on the scope of Snowball, which was launched last year, AWS added Snowball Edge and Snowmobile to the lineup. While Snowball provided 50TB of storage, each Snowball Edge appliance has 100TB of storage and offers more connectivity protocols than the previous version. You now also have Snowmobile to meet the needs of customers with petabytes of data. Snowmobile is a 45-foot container that is delivered to customers on a trailer truck. This secure data truck stores up to 100 PB of data and can help companies move exabytes of data to AWS in a matter of weeks instead of years. Snowmobile attaches to the client’s network and appears as a local, NFS-mounted volume.

Development tools

Amazon added AWS CodeBuild to the existing suite of developer tools like CodeCommit, CodeDeploy, and CodePipeline. AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. CodeBuild can be a cost-effective and scalable alternative to running a dedicated Jenkins instance.
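As a rough sketch, kicking off a build from a script could look like the following with boto3; the project name is a placeholder and assumes a CodeBuild project has already been defined.

```python
import boto3

# A hedged sketch of starting a build with AWS CodeBuild; the project name is a placeholder.
codebuild = boto3.client("codebuild")

build = codebuild.start_build(projectName="my-service-build")

# The returned build id can be used to poll status with batch_get_builds.
print(build["build"]["id"])
```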

AWS X-Ray helps developers analyze and debug distributed applications in production, such as those built using a microservices architecture. X-Ray provides an end-to-end view of requests as they travel through the application and helps developers identify and troubleshoot the root cause of performance issues and errors. AWS X-Ray is in preview.

Monitoring, Operations, and Security

Similar to the AWS Service Health Dashboard, AWS now provides a Personal Health Dashboard. As the name indicates, this dashboard gives you a personalized view into the performance and availability of the AWS services you are using, along with alerts that are automatically triggered by changes in the health of those services.

DDoS (Distributed Denial of Service) attacks are a very common trouble spot. Amazon’s new offering is AWS Shield, a DDoS protection service that safeguards web applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. It provides DDoS protection at the DNS, CDN, and load balancer tiers and is available in free and premium flavors.

Big Data and Compute

AWS Batch, a service for automating the deployment of batch processing jobs, was released in preview. AWS Batch enables developers, administrators, and users to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. With Batch, users have access to the power of the cloud without having to provision, manage, monitor, or maintain clusters, and there is no software to buy or install. AWS Glue, a fully managed ETL service that makes it easy to move data between your data stores, was also launched.
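For AWS Batch, jobs are submitted to queues and run against job definitions. As a rough sketch, submitting a job with boto3 could look like this; the job name, queue, and definition are placeholders for resources you would have defined.

```python
import boto3

# A hedged sketch of submitting a job to AWS Batch; all names below are placeholders.
batch = boto3.client("batch")

job = batch.submit_job(
    jobName="nightly-report",
    jobQueue="default-queue",
    jobDefinition="report-generator:1",
)

# Batch schedules the job onto managed compute resources; track it by id.
print(job["jobId"])
```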

Mobile Services

Dr. Vogels also launched Amazon Pinpoint, a mobile analytics service. Amazon Pinpoint makes it easy to run targeted campaigns to drive user engagement in mobile apps through the use of targeted push notifications.

AWS refers to re:Invent as an educational event, and they were very successful in achieving this in 2016. You can find recordings of the keynotes and tech talks on YouTube.

written by: Praveen Modi (Sr Technical Architect)

What the Rise of Cloud Computing Means for Infrastructure

Infrastructure setup and application programming are merging into simultaneous processes. With this critical change, we need to take a fresh look at how we design solutions. If we don’t, our projects risk failure.

Building and installing system infrastructure (think servers and networks) was once an arduous process. Everything had to be planned out and procured, often at high cost and with a long lead time. Oftentimes, server specifications were created before the actual application (and the technologies involved) that would need to run on it had been fully fleshed out. The actual application programming task was a whole separate step with little overlap.

That’s no longer the case due to the rise of Cloud computing. Infrastructure is now software, and the convenience of that leads to new challenges.

Merging Designs

With Cloud computing, infrastructure is far more fluid thanks to all the programmable elements. As a result, upfront planning isn’t as critical, since cost and especially timelines are no longer the constraints they once were. Compute, storage, and network capacity is immediately accessible and can be changed dynamically to suit any need.

With these changes, the days of separate tracks for application and infrastructure development are over. The once separate design processes for each of them need to merge as well. This is largely driven by 3 factors:

  1. Historically, the separation of application and infrastructure development didn’t work, but it was accepted as a given.
  2. Cloud architectures take on a bigger role than traditional infrastructure.
  3. New architectures create new demands.

The Historical Challenge

Performance, availability, and scalability have always been a challenge. Before cloud architectures became standard, vendors tried to address these requirements with complex caching architectures and similar mechanisms. The reality is that none of the products really delivered on this premise out of the box. Obviously, one core challenge was that companies were trying to deliver dynamic experiences on a fixed infrastructure.

But even within that fixed infrastructure, any deployment required exhaustive performance-tuning cycles and vendor support to overcome the problem of infrastructure designed independently from the application, with only moderate success.

The Changing Infrastructure Role

Cloud architectures also start to play a bigger role in the overall systems stack. Let’s look at a hypothetical basic Java application with an API built on Amazon Web Services, the most popular cloud computing service, to see what the merger of system infrastructure and application programming looks like.

The application can be developed like any other Java application, but when it comes to how security is addressed, what is specified where?

On the application side, there could be internal security mechanisms that define what access to services is available. Internal application roles can determine what access to different data elements a service request has. From an infrastructure perspective, Amazon Web Services can also provide security measures (access to ports, another layer of permissions, etc.) that affect how the application API can be accessed by clients. In addition, AWS policies can define which requests arrive at the application, or which data elements are available once a request is being serviced.
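To make the infrastructure side of that example concrete, here is a hedged sketch of restricting who can reach the API at the network layer with a security group rule; the group id, port, and CIDR range are illustrative assumptions, not a recommended configuration.

```python
import boto3

# A hedged sketch: limit which clients can reach the API at all,
# independent of application-level roles. All identifiers are placeholders.
ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "Corporate clients only"}],
        }
    ],
)
```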

As this example shows, the application and infrastructure views need to be merged in order to fully understand the security mechanisms available. Just focusing on one side or the other paints an unclear picture.

New Architectures

A number of new architectures have been created now that infrastructure is programmable. Reactive architectures and code executors like Google Cloud Functions and AWS Lambda are examples of these serverless computing services. Once we start using fully dynamic infrastructures for auto-scaling and microservices, the need for an integrated view of both the application and the underlying systems becomes even more important.

Finding New Solutions

Handling infrastructure and application development in an integrated manner is difficult.

One of the challenges is that the design tools to visualize this are lacking. Tools like Cloudcraft help in this regard, but a fully integrated view is still missing, especially if you start using new architectures like AWS Lambda. Ideally, there’d be a way to visually layer the different perspectives of an architecture in a way that resembles a Photoshop image. Easily looking at an architecture from the perspective of security, services, data flows, and so on would be incredibly useful.

From a process perspective, infrastructure and application have to be handled with the same processes. This includes code management, defect tracking, and deployment. This of course has implications for the skills and technology needed to successfully complete a project, and not all organizations are ready for this yet.

Conclusion

These days, infrastructure and application are intertwined, and an application solution that doesn’t address the infrastructure element is incomplete. Focusing on one without the other cannot address the critical requirements around security, performance, scalability, availability and others. It is important to invest in the tools, processes and people to deliver on this.

written by: Martin Jacobs (GVP, Technology)

Diffusing Automation Arguments: The Inevitability of Automation

As mentioned in one of my previous posts, delivering a successful Cloud architecture necessitates the use of automation. Unfortunately, replacing manual tasks with code takes effort, and automation is therefore not always adopted. Here are some key arguments against the adoption of automation:

Priority

“We are already on a tight deadline with all the application features that need to be incorporated.”

Automation is critical to the success and longevity of your product. What’s also true, though, is that this is an industry of tight deadlines, stretch goals, and additional features. You might wonder if you have time to automate.

In this case, unit testing is an interesting comparable situation. Unit testing hasn’t always taken priority in the application development process due to time constraints; it has often been put off until the end of the development phase with a secondary status. However, unit testing has slowly received the priority it deserves, as it has become clear that it provides benefits in the long run.

And as much as testing is important, automation is even more critical. Automation is an actual part of your runtime application, and should be treated at the same level as your code. The features and capabilities for automation should therefore be included in the application/solution backlog and should be given the same treatment as other features and functionality.

Skills

“We don’t have the skills in-house. Even if we were to use a vendor, we wouldn’t be able to maintain it.”

No doubt, automation is a serious challenge. It requires a fundamental shift in mindset for organizations around the need to develop these skills. You may remember that in the early days of web development, it took quite some time for front-end development to become as respected and critical a role as, say, database administration. The automation architect will face a similarly arduous battle in the coming years. For any organization that leverages the Cloud and maintains its own technology platforms, it is a critical role that must be filled or grown within the organization.

Time

“It is faster to do it without automation.”

This is often true for the initial setup. However, considering how quickly Cloud architecture continues to evolve, the time gained from a hasty initial setup could quickly be lost in subsequent change management.

With Cloud architectures incorporating more distinct elements, ensuring consistency across environments is virtually impossible without automation. As a result, without automation, the likelihood of generating defects due to environment mismatches increases quickly when your Cloud architecture grows.

Technologies in Use

“The application technologies we use don’t support automation.”

As you architect your application, you identify critical non-functional requirements. For example, security and performance are always part of the decision criteria for the overall architecture stack. If the technologies selected cannot support the level of performance required, you evaluate alternative options and migrate your architecture to a new solution.

The same applies for automation. If automation cannot be supported with the existing technologies, it is necessary to look at alternatives, and evolve your architecture.

Overwhelming Choices

“We are confused by the technology landscape.”

The number of solutions in the marketplace can certainly feel paralyzing. There are configuration management tools such as Ansible, Chef, and Puppet. There are provisioning tools such as AWS CloudFormation, Heat, Terraform, and Cloudify. Solutions are constantly evolving, and new vendors are always showing up.

It is difficult to make the right choice of technologies. The selection should be made with the same mindset as selecting the enterprise set of programming languages. It requires an evaluation of which is best suited for the organization. Additionally, a combination of these technologies might be the right solution as well. As you embark on applying automation, here are some tips for being successful:

  • Select a set of automation technologies and stick with it. There will always be pressure to explore alternatives, especially with a quickly changing vendor landscape, but it is important to fully understand your selected technologies before looking at alternatives.
  • Start simple. Amazon Elastic Beanstalk or Heroku are great ways to begin to incorporate automation into your application development workflow and understand how it can further drive productivity and quality.
  • Avoid the framework syndrome and focus primarily on building the automation that is needed for your application. Don’t try to build a framework for automation in the enterprise. The landscape is constantly evolving and frameworks quickly become outdated and superseded.
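As a concrete starting point with one of the provisioning tools mentioned above, here is a minimal sketch of creating a stack from a template with boto3 and CloudFormation; the stack name and template file are hypothetical.

```python
import boto3

# A minimal sketch of provisioning through CloudFormation; names and template are placeholders.
cloudformation = boto3.client("cloudformation")

with open("web-tier.yaml") as template:
    cloudformation.create_stack(
        StackName="web-tier",
        TemplateBody=template.read(),
        Capabilities=["CAPABILITY_IAM"],
    )
```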

written by: Martin Jacobs (GVP, Technology)

The Cloud and the 100% Automation Rule

Automation and the Cloud go hand-in-hand. Without automation, the Cloud is just classic deployment with rented servers instead of your own. You’ll need automation if you want to successfully deliver in the Cloud. This was true early on in the Cloud era, and it is even more important now.

As Cloud environments evolve and extend, Cloud architectures consist of far more distinct elements than a standard dedicated architecture. With the emergence of new tools like AWS Lambda, which allows you to run code without provisioning servers, these distinctions are becoming even more pronounced.

As we know, manual tasks are tricky. It can be challenging to consistently perform manual tasks correctly due to quickly changing technology and human error. For that reason, 100% automation becomes an important objective. Any deviation from full automation will create additional challenges.

For example, AWS Cloud hosting quickly becomes complex as organizations struggle to choose between many different instance types. You might not know whether you’d be better off using M3, M4 or C3.

Each decision has its own cost implications. Unless you have achieved the 100% automation target, you are often locked into an instance type due to the difficulties and risks of switching to another one, eliminating the opportunity to get the optimal cost/performance balance.

Our automation tools have greatly improved, but we still have work to do. Unfortunately, 100% automation is not always possible, and manual steps are frequently still required. When that is the case, ensure that the manual process is automated as much as possible. I’ll highlight this with a couple of examples.

Provisioning

Many tools automate the setup process for provisioning development, test, and production environments. From CloudFormation to Ansible, Chef, and Puppet, many steps can be automated, and as a result are traceable and reproducible. That said, it would be nice to automate the updates to the provisioning stack further.

To start, the provisioning stack is often a static representation of an ideal architecture. But we live in a fast-paced world, and business moves quickly. Making automation work in dynamic environments can be tricky, particularly when infrastructure needs change, new capabilities are launched, or pricing needs to be optimized. Once your largely static architecture is in place, it is hard to keep it evolving to take advantage of new capabilities.

AWS launched a NAT gateway offering recently, eliminating the need for a NAT instance. For the majority of AWS customers, switching to a NAT gateway will improve the reliability of the overall architecture. Unfortunately, it can be difficult to ensure that this switch happens pro-actively.

I would recommend a scheduled review of new provider capabilities for inclusion. If something is needed, a high priority ticket is submitted to ensure that these new capabilities are incorporated with the same priority as code enhancements or defects. If necessary, the provisioning of new environments can be blocked until these tickets are addressed.

Management

Tools that automate environment management also exist. Many Cloud environments can deploy patches and upgrades automatically.

However, commercial or open source products are often deployed in these Cloud environments, and many don’t have the tools to automate the communication of new releases, patches or other updates. Checking for updates becomes a manual process.

To automate the manual process, use a tool like versionista.com to check whether a vendor page lists new hotfixes or release updates. Similar to the provisioning scenario, if a change gets detected, create a ticket automatically with the right priority, ensuring its implementation.
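If a hosted service isn’t an option, even a small script can automate the check. The sketch below hashes a vendor’s release-notes page and raises a ticket when it changes; the URL and the create_ticket helper are hypothetical placeholders.

```python
import hashlib

import requests

# A hedged sketch of automating the "check for updates" step.
# The URL and create_ticket helper are placeholders, not a real vendor or ticketing API.
RELEASE_NOTES_URL = "https://vendor.example.com/release-notes"


def create_ticket(summary):
    # Placeholder for your ticketing system's API (Jira, ServiceNow, etc.).
    print(f"TICKET: {summary}")


def check_for_updates(previous_hash):
    page = requests.get(RELEASE_NOTES_URL, timeout=30)
    current_hash = hashlib.sha256(page.content).hexdigest()
    if current_hash != previous_hash:
        create_ticket(f"Vendor release notes changed: {RELEASE_NOTES_URL}")
    return current_hash
```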

Optimization

We will start to see real savings once we optimize Cloud infrastructure. However, once the architecture is in place it is challenging to optimize further. This must be a critical core capability for any technology team.

We can optimize development and test environments. Often neglected after a system has launched, they are a good place to start: we have managed to eliminate manual processes by implementing an automatic shutdown of instances after periods of low usage. The DNS entry for the instance is redirected to the continuous integration environment, allowing testers or developers with the right privileges to restart the instance.
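The shutdown itself can be driven by a scheduled job. Below is a hedged sketch of the idea using boto3 and CloudWatch metrics; the tag names, threshold, and look-back window are illustrative assumptions, not our production implementation.

```python
from datetime import datetime, timedelta

import boto3

# A hedged sketch: stop tagged dev/test instances whose hourly average CPU
# stayed very low over the past day. Tags and threshold are placeholders.
ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")


def stop_idle_instances(threshold_percent=2.0):
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:environment", "Values": ["dev", "test"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    for reservation in reservations:
        for instance in reservation["Instances"]:
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
                StartTime=datetime.utcnow() - timedelta(days=1),
                EndTime=datetime.utcnow(),
                Period=3600,
                Statistics=["Average"],
            )
            averages = [point["Average"] for point in stats["Datapoints"]]
            if averages and max(averages) < threshold_percent:
                ec2.stop_instances(InstanceIds=[instance["InstanceId"]])
```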

We can also improve upon cost management. A common approach for disaster recovery is to copy data snapshots to another region. However, as the site evolves the data size increases and the disaster recovery process becomes more expensive. How do you track when you should re-architect the process?

Cost management tools like Amazon Cost Explorer focus on products (e.g. EC2, bandwidth), not processes or features. To ensure optimal cost management, you should automatically map the cost data to your processes using tags. Enforce the existence of tags through automated checking, and also automate the processing of the report. This will provide the team with clear indications of where to invest in optimization.
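Once tags are in place, the report processing can be scripted. Here is a hedged sketch using the Cost Explorer API via boto3; the tag key and date range are illustrative assumptions.

```python
import boto3

# A hedged sketch of pulling cost data grouped by a process tag; tag key and dates are placeholders.
ce = boto3.client("ce")

report = ce.get_cost_and_usage(
    TimePeriod={"Start": "2016-11-01", "End": "2016-12-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "process"}],
)

# Print cost per tagged process for the period, as input to optimization decisions.
for group in report["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```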

Challenges in Automation

Automation, like anything else, has its challenges. For a Cloud-optimized environment, it is critical to reach for 100% automation. If you cannot fully achieve that, automate as much of the remaining manual process as possible.

written by: Martin Jacobs (GVP, Technology)

Building an IVR system in the cloud

Interactive Voice Response (IVR) systems offer a way for users to interact with existing software applications using voice and keypad input from their phones. Below are some of the benefits that IVR systems offer:

  • Allow access to software systems through phones in addition to other interfaces like browsers & desktop clients

  • Self-service systems that reduce the load on support staff

  • Systems that run 24x7

  • Systems that perform routing based on customer profile, context, etc.

This article will focus on how to build a flexible and extensible IVR system painlessly using Cloud-based services like Twilio.

Twilio is a Cloud communications company offering Infrastructure as a Service (IaaS). Twilio provides telephone infrastructure in the cloud and exposes it through Application Programming Interfaces (APIs) that can be used to build applications that send and receive phone calls and text messages. Getting started with Twilio is easy:

  • Sign up on Twilio.com

  • Buy a number

  • Program the number by connecting it to an HTTP/HTTPS URL. This is the URL that will be invoked when the number is dialed. The URL needs to respond with an XML document, called TwiML, which is Twilio’s proprietary XML language. Using TwiML, developers can perform useful functions like playing a text message as speech, gathering data from callers using the keypad, recording conversations, sending SMS, connecting the current call to any other phone number, and more (a minimal sketch follows below).
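As an illustration, here is a minimal sketch of the webhook behind a Twilio number, built with Flask and the Twilio Python helper library; the route and prompt text are assumptions for the example.

```python
from flask import Flask, Response
from twilio.twiml.voice_response import Gather, VoiceResponse

app = Flask(__name__)


@app.route("/voice", methods=["POST"])
def voice():
    # Build the TwiML that Twilio executes when the number is dialed.
    twiml = VoiceResponse()
    gather = Gather(num_digits=1, action="/handle-key")
    gather.say("Thanks for calling. Press 1 for support, or 2 for sales.")
    twiml.append(gather)
    return Response(str(twiml), mimetype="text/xml")
```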

[Figure: twilio-flow-1]

Since the phone numbers can be programmed and controlled using any HTTP/HTTPS URLs, it’s easy to build interactive applications to handle incoming calls. The URLs can serve static TwiML files or dynamic web applications that interact with a database and other systems and perform any custom business logic.

In addition, Twilio also provides REST APIs to perform functions like making a call, modifying a live call, collecting call logs, creating queues, buying numbers, sending SMS, and more. There are helper libraries available in all the popular programming languages that provide a wrapper for working with the REST APIs.
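For example, with the Python helper library, placing an outbound call takes only a few lines; the credentials, phone numbers, and TwiML URL below are placeholders.

```python
from twilio.rest import Client

# A minimal sketch of the REST API via the Python helper library; all values are placeholders.
client = Client("ACXXXXXXXXXXXXXXXX", "your_auth_token")

call = client.calls.create(
    to="+15558675309",
    from_="+15017122661",
    url="https://example.com/voice",  # Twilio fetches TwiML from this URL when the call connects.
)

print(call.sid)
```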

From: Khurshidali Shaikh - Razorfish India Team

Apigee and Mashery

There is some pretty cool stuff going on around APIs (application programming interfaces). It’s getting more and more important to use APIs to access social graphs and social functionality, for example through API calls to Facebook Connect (now called Facebook for Websites) and the Twitter API. But on the other side of the equation, it’s getting more and more important for your company to open up its own APIs. Best Buy’s Remix is one of my favorite examples of a company opening up its product catalog so people can build apps on top of it. Think of things like shopping engines or widgets and gadgets for the latest on-sale products.

Companies like Apigee and Mashery help ensure that you are getting the best performance. Think about it like a caching delivery network for API calls. Some of the caching can be done with Akamai, e.g. for JSON responses, but it’s not built for that. Apigee has an offering on top of Twitter, for example. Mashery and Apigee are great for exposing your own APIs as well. They can throttle calls to ensure that your application doesn’t fall down if you get a spike in traffic, and they can help accelerate delivery to your users through caching. These companies also provide services to manage the community of developers, doing things like providing keys for access to the engine. Analytics also start to get interesting; some have called Apigee’s analytics the Google Analytics for APIs.

How do we define cloud computing?

It comes up again. Folks are asking us to define cloud computing, and every time we do, we refine it a little more. At times it has seemed like Cloud Computing became the new Web 2.0, a blanket term for everything :). I actually think we define it similarly to the Wikipedia definition. For us it breaks down into two categories: cloud services and cloud infrastructure.

Cloud services are defined as technologies that provide a virtual service, either through an open API or through a user interface. Examples range from the classic Salesforce.com to cloud email like Gmail, or Twitter and the Twitter Open API, and Facebook Connect. There are lots of others, and the space is growing at a frantic pace. Open APIs like Facebook Connect and the Twitter API are incredibly powerful for driving traffic and getting your product, brand, and service out there. In the past we would build a social network from scratch for a web site, which meant custom application development and maintenance; now we use JavaScript and REST to interface with Facebook Connect and we are up and running in a fraction of the time it used to take.

Cloud infrastructure is defined as the virtual and physical infrastructure powering web and digital applications. Cloud infrastructure was strongly enabled through technologies like VMware that made it possible to turn one physical server into 10 or more virtual servers. This, coupled with low-cost storage, created an elastic, scalable platform that enables us to do things that weren’t feasible under the old cost models. These services are metered and you only pay as you go, which is a drastic departure from buying a server and managing and paying for it all the time whether you use it or not. What used to take weeks to get a server up and ready now takes minutes, and all you need is a credit card. Companies paving the way include Amazon, Microsoft, and Google, with traditional hosting companies like Rackspace, Savvis, Terremark, and others also making these infrastructure services available.

We believe the cloud and its ability to scale at a lower cost point will enable innovation like never before.

Technology Predictions for 2010

Razorfish’s Matt Johnson outlined his predictions for content management over at our CMS blog, www.cmsoutlook.com. Many of his predictions will hold true for web technology at large as well. I see traction and opportunities for:

  • Cloud Options: We will see further movement towards cloud solutions, and more vendors providing SaaS alternatives to their existing technologies. It ties into the need for flexibility and agility, and the cost savings are important in the current economic climate.

  • APIs and SOA: Functionality will be shared across many web properties, and the proliferation of mini apps and widgets will continue. APIs are becoming a critical element of any successful solution. This is also driven by the increased complexity of the technology platform. Solutions we now develop frequently incorporate many different technologies and vendors, ranging from targeting and personalization to social capabilities.

  • Open Source: Not only in content management, but in many other areas, Open Source will start to play an important role. Examples are around search, like Solr, or targeting, with OpenX. Cloud computing also further drives the expansion of Open Source. As companies look to leverage cloud solutions for agility, the licensing complications with commercial solutions will drive further open source usage.

What do you see as additional trends?

Keeping the cloud open

I really like Matt Asay’s article on why we need to focus on keeping the cloud open and less on keeping the operating system open. If you think of the cloud as an ‘array’ of applications and less as a hosting solution, it starts to open up the aperture on its true potential. Imagine the ability to stitch together applications across the cloud like you can stitch together data: basically, a Yahoo Pipes for applications, not just data.


SharePoint Conference 2009 - Day 2

The challenge I always have with these conferences is the plethora of choices available to attendees. I already know what topics I want to focus on: WCM; architecting, developing, and building public-facing internet sites; and social features in 2010. But even so, there are still time slots where I have narrowed down the choice to three, and then I have to make the tough decision and hope that I made the right choice. For the most part, I decided to always go to a 300 or 400 level session, and then just watch the video and the deck online for the 200 sessions I missed.

For the 9am slot, I had to choose between Advanced Web Part Development in VS 2010 and Introduction to Service Applications and Topology. The architect won over the developer, so I went to the Service Applications session. Essentially, in 2010 the SSP (Shared Service Provider) model is replaced by the new Service Applications architecture. You build service applications that can live in a separate application server, and you call them from clients, in this case a SharePoint web front end, via proxies. I’m not sure if it is a correct analogy, but I kind of liken it to the old DCOM architecture. This makes it easier for organizations (and frankly, ISVs) to build Service Applications that can be deployed once and then used in multiple SharePoint web apps, and even multiple SharePoint farms.

There’s a follow-up session to this about Scaling SharePoint 2010 Topologies for Your Organization, but I skipped that in favor of Overview of SharePoint 2010 Online. SharePoint Online is another product in Microsoft’s Software as a Service offerings. It is essentially a service where Microsoft hosts and manages SharePoint for your organization. This is part of Microsoft’s Business Productivity Online Suite (BPOS), which also includes Exchange Online, Office Live Meeting, Office Communications Online, and Dynamics CRM Online. It is good for small or medium-sized businesses but can also be considered for the enterprise in some special cases. The important thing to note is that this does not have to be an all-or-nothing decision. SharePoint Online is supposed to complement/extend your on-premises infrastructure, not necessarily replace it.

In the afternoon, I agonized over Developing SharePoint 2010 Applications with the Client Object Model and Microsoft Virtualization Best Practices for SharePoint, but ended up going to Claims Based Identity in SharePoint 2010. The client object model was getting a lot of good tweets during and after the session, and I see a lot of opportunities there for us to pull SharePoint information via client calls, i.e., JavaScript or Silverlight. The virtualization session focused on Hyper-V, so I didn’t feel too bad about missing it. In the Claims Based Identity session, Microsoft introduced their new Identity Framework and explained how it works. It essentially works like Kerberos, with SAML tokens being created. The good news is that it supports AD, LDAP, and SAML. The bad news is that it doesn’t support OpenID and other standard internet auth schemes/standards… yet.

I wanted to know more about composites and the new Business Connectivity Services (BCS), so I went to Integrating Customer Data with SharePoint Composites, Business Connectivity Services (BCS) and Silverlight. BCS is one other new thing in 2010 that is interesting. Allowing SharePoint to create External Content Types that can pull data from external LOB systems opens up a lot of possibilities, but most of the demos I’ve seen so far only connect to one table. In the real world, we would be connecting to more complex data, in a lot of cases pulling hierarchical data, and I wanted to see how this works, and more importantly, whether it will support CRUDQ features. This session finally demoed how to connect using a LINQ data source. I didn’t see the CRUDQ part, though, because the demo used read-only data.

For the last session of the day, I chose between Securing SharePoint 2010 for Internet Deployments (400) and SharePoint 2010 Development Best Practices (300). Of course, I chose the geekier session, since security is a hot topic on public-facing sites. However, this was probably one of the more disappointing sessions for me, as it was really targeted more towards SP IT pros than developers. It is more about hardening your servers and protecting your network, and these considerations come by default in Windows Server 2008. I probably would have enjoyed the best practices session better, even though I was afraid it would be filled with “duh” moments. I have to check that deck out though; it produced some funny tweets.

Day 2 is also the night of the Conference Party.  This year, the theme is 80’s night at The Beach (Mandalay Bay) with Huey Lewis and the News providing music and entertainment.  Too bad I missed it.

CloudFront, Amazon's Content Delivery Network (CDN)

[Image: speed differences between Amazon S3 and CloudFront, by playerx via Flickr]

It’s nice to see Amazon moving into the CDN space with their CloudFront offering; it seems like the CDN market can definitely use a fresh look at the challenge. It looks like it builds off your usage of Amazon S3, with an accelerator finding the closest cache server to deliver your content. With this approach it doesn’t seem like a great fit as a CDN for every architecture. The chart on the right is an interesting comparison.

I’ve been intrigued over the last couple of years with Coral caching. Peer-to-peer open source caching seems ripe with opportunity. Wouldn’t it be cool if my media center PC, Apple TV, and other laptops that sit at home idle during the day could be leveraged to help offload servers? I guess it’s a balance of saving power by sleeping or turning off the box vs. using less server power.

[Image: diagram of a peer-to-peer network, via Wikipedia]


How does cloud technology benefit marketing and service organizations?

Lots of folks have been asking about how Cloud Computing helps marketing or web development projects. Here are a couple of the key benefits that have bubbled to the top of the conversations.

  • Cost: cloud services are drastically less expensive than traditional hosting options, so the marketer can do more and innovate more with their money. Cloud services also enable some basic things such as faster time to market, and therefore faster results, because we can build solutions in less time and don’t have to wait for a technology team to allocate servers and set up physical devices.

  • Faster scalability to better keep up with the peaks and valleys of marketing campaigns and user traffic. In the old days we would have to prepare for an ad, email, keyword, or offline-online campaign and get servers ready on standby. With cloud services we can scale on demand with a lower cost and a faster timeline, because we aren’t limited by physical servers.

  • Strategically, social services are enabled through cloud computing. New offerings like Facebook Connect, the Twitter/Delicious/Reddit/Digg/etc. APIs, or even YouTube embed capabilities are all cloud services that enable you to drive traffic to your site without having to build your own social network. Facebook Connect is a cloud service that enables the portable social graph, bringing users to your property. One user posting back to their Facebook wall results in three more users accessing your site. So not only do you get exposure, but you save on Google keyword buys :). In the old days, three years ago, we tried to build social networks on sites like flip.com and other properties; now we tie into the cloud service and get the same functionality in a fraction of the time.

Lastly, there’s a word of caution around cloud services. Make sure you have some sort of redundancy, i.e. multiple services to achieve the same goal. We worked with Billboard on the latest release of their site, which is a great example: if Facebook goes away, we are still sharing with other services. Other questions arise around redundancy for infrastructure cloud providers. The cloud computing manifesto at least acknowledges the need for redundancy, but how do we get the providers to deliver it?


Microsoft talking about a private cloud?


Just a couple of weeks after Amazon’s announcement of their private cloud offering, it looks like Microsoft is starting to open discussions in that direction. What’s interesting about Microsoft’s approach is that they are coming at it from two directions: they are a provider to the data centers, hosting providers, and enterprises building these offerings, as well as a provider directly to the consumer.


.gov is saving money and time with cloud computing

CNET reports today on how Vivek Kundra, the US Chief Information Officer (CIO), is pushing for more movement into the cloud computing space to help save taxpayer dollars. There are definitely huge savings with cloud computing, and it’s getting harder and harder for enterprises to ignore. Especially with the recent announcement around Amazon’s private cloud, it seems like the enterprise barriers to adoption are slowly eroding away.

I did find Vivek’s assertion here hard to believe:

“Using a traditional approach to add scalability and flexibility, he said, it would have taken six months and cost the government $2.5 million a year. But by turning to a cloud computing approach, the upgrade took just a day and cost only $800,000 a year.”

but not knowing all the details, it might be real. Six months down to one day sounds too much like pixie dust to me!


Amazon Advances Cloud Computing with the Private Cloud


Amazon advances cloud computing with the introduction of a private cloud. The economics really are powerful enough to force business to take note. Anecdotally, I’ve spoken to several highly functional startups using the cloud successfully for their web applications. With the advent of more secure private clouds, I don’t see how the enterprise can stay away much longer.
