Announcing the 1.0.0 RTM of Microsoft Azure WebJobs SDK

What is the WebJobs SDK?

The WebJobs feature of Microsoft Azure Web Sites provides an easy way for you to run programs such as services or background tasks in a web site. You can upload an executable file such as an .exe, .cmd, or .bat file to your web site and run it as a triggered or continuous WebJob. Without the WebJobs SDK, connecting to storage and running background tasks requires a lot of complex programming. The SDK provides a framework that lets you write a minimum amount of code to get common tasks done.

The WebJobs SDK has a binding and trigger system which works with Microsoft Azure Storage Blobs, Queues and Tables as well as Service Bus. The binding system makes it easy to write code that reads or writes Microsoft Azure Storage objects. The trigger system calls a function in your code whenever new data arrives on a queue or in a blob.
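
For example, here is a minimal sketch of a WebJobs SDK program (it assumes the Microsoft.Azure.WebJobs NuGet package and storage connection strings in configuration; the queue and container names are illustrative):

using Microsoft.Azure.WebJobs;

public class Functions
{
    // Runs whenever a new message appears on the "orders" queue. The message
    // body is bound to the string parameter, and the output blob is created
    // for you in the "processed" container.
    public static void ProcessOrder(
        [QueueTrigger("orders")] string message,
        [Blob("processed/latest.txt")] out string output)
    {
        output = message;
    }
}

class Program
{
    static void Main()
    {
        // The host discovers public static methods carrying trigger attributes
        // and starts listening for new input.
        var host = new JobHost();
        host.RunAndBlock();
    }
}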

Scenarios for the WebJobs SDK

Here are some typical scenarios you can handle more easily with the Azure WebJobs SDK:

  • Image processing or other CPU-intensive work.
  • Other long-running tasks that you want to run in a background thread, such as sending emails. Until now you couldn’t reliably do this in ASP.NET because IIS would recycle your app after a period of inactivity. Now, with Always On in Azure Web Sites, you can keep the site from being recycled when it is idle, which means you can run long-running tasks or services using WebJobs and the WebJobs SDK.
  • Queue processing. A common way for a web frontend to communicate with a backend service is to use queues. This is a common producer – consumer pattern.
  • RSS aggregation. If you have a site that maintains a list of RSS feeds, you could pull in all of the articles from the feeds in a background process.
  • File maintenance, such as aggregating or cleaning up log files.
  • Data ingress, such as reading CSV files, parsing logs, and storing the data in Tables.

Goals of the SDK

  • Make it easier to use Azure Storage for background processing work; you do not have to write boilerplate code to read from and write to storage.
  • Provide a rich diagnostics and monitoring experience without your having to write any diagnostics or logging code.

Features of the SDK

Triggers

Functions will be executed when a new input is detected on a queue or a blob.

Bindings

The SDK provides model binding between C# primitive types and Azure Storage objects such as Blobs, Tables and Queues, as well as Service Bus. This makes it easy for a developer to read from and write to blobs, tables and queues. Binding provides the following benefits.

  • Convenience. You can pick the type that’s most useful for you to consume and the WebJobs SDK will take care of the glue code. If you’re doing string operations on a blob, you can bind directly to TextReader/TextWriter rather than worry about how to convert the blob to a TextWriter.
  • Flushing and closing. The WebJobs SDK will automatically flush and close outstanding outputs.
  • Unit testability. The SDK makes it possible to unit test your code, since you can mock primitive types like TextWriter rather than ICloudBlob.
  • Diagnostics. Model binding works with the dashboard to give you real-time diagnostics on your parameter usage.

The following bindings are currently supported: Stream, TextReader/TextWriter, and String. You can add support for binding to your custom types and to other types from the Storage SDK as well.
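
For instance, a blob-to-blob transform can be written entirely against TextReader/TextWriter; a small sketch (container names are illustrative):

using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // The SDK opens the triggering blob as a TextReader and creates the
    // output blob's TextWriter, flushing and closing it when the function returns.
    public static void UpperCaseBlob(
        [BlobTrigger("text-input/{name}")] TextReader input,
        [Blob("text-output/{name}")] TextWriter output)
    {
        output.Write(input.ReadToEnd().ToUpperInvariant());
    }
}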

Azure Queues

The SDK can be used to trigger a function when a new message arrives on a queue. The SDK lets you easily access the message contents by allowing you to bind to String, POCO (plain old CLR object), byte[] and Azure Storage SDK types. Following are the other mainline features available for queues.

Please read these announcement posts 0.5.0-beta, 0.4.0-beta and 0.3.0-beta for more details.

  • Trigger a function and bind the message contents to String, POCO (plain old CLR object), byte[] and CloudQueueMessage.
  • Send single or multiple messages to a queue.
  • Parallel execution with Queues: The SDK fetches messages for a queue in parallel. If a function is listening on a queue, as in the sketch following this list, the SDK will get a batch of 16 (the default) queue messages in parallel for that queue, and the function is also executed in parallel.
  • Handling of poison messages in Azure Queues.
  • Access to the DequeueCount property of the message.
  • Better polling logic for Azure Queues: The SDK implements a random exponential back-off algorithm to reduce the effect of idle-queue polling on storage transaction costs.
  • Fast path notifications: The SDK fast-tracks messages sent to queues through the SDK, so listening functions are triggered quickly.
  • Configuration options for queue polling: The SDK exposes a few knobs where you can configure the queue polling behavior, as in the sketch following this list.
    • MaxPollingInterval: the longest period of time to wait before checking for a new message when a queue remains empty. The default is 1 minute.
    • MaxDequeueCount: the number of times a message is dequeued and retried before it is moved to a poison queue. The default is 5.
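
A sketch of a queue-listening function together with the polling knobs above, configured on the host (the queue name and values are illustrative):

using System;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Up to BatchSize messages are fetched at a time and processed in parallel.
    public static void ProcessQueueMessage([QueueTrigger("orders")] string message)
    {
        Console.WriteLine(message);
    }
}

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();
        config.Queues.BatchSize = 16;                               // messages fetched in parallel
        config.Queues.MaxPollingInterval = TimeSpan.FromMinutes(1); // back-off ceiling for an idle queue
        config.Queues.MaxDequeueCount = 5;                          // attempts before a message is moved to the poison queue
        new JobHost(config).RunAndBlock();
    }
}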

Azure Blobs

The SDK can be used to trigger a function when a new blob is detected or an existing blob is updated. The SDK lets you access the blob contents by allowing you to bind to Stream, String, POCO (plain old CLR object), byte[], TextReader, TextWriter and Azure Storage SDK types.

Please read these announcement posts 0.5.0-beta, 0.4.0-beta and 0.3.0-beta for more details.

  • BlobTriggers fire only when a new blob is detected or an existing blob is updated.
  • Retry and error handling for Blobs: This release of the SDK adds support for retrying functions when there was an error processing a blob. A BlobTrigger will be processed up to a specified maximum number of retries (the default is 5). Once the function has failed that many times, the SDK puts a message on a queue called “webjobs-blobtrigger-poison”. You can trigger a function using QueueTrigger on that queue and do your own custom error handling of the message, as in the sketch below.
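
A minimal sketch of such a handler (the queue name is the one the SDK uses; the message body describes the blob that failed):

using System;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Invoked once a blob has exceeded its retry limit.
    public static void HandlePoisonBlob(
        [QueueTrigger("webjobs-blobtrigger-poison")] string message)
    {
        Console.WriteLine("Blob processing failed repeatedly: " + message);
    }
}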

Azure Storage Tables

The SDK allows you to bind to Tables and perform read, write, update and delete operations.

Please read the announcement posts for 0.6.0-beta, 0.5.0-beta, 0.4.0-beta and 0.3.0-beta for more details.

Ingress is a common scenario in which you parse files stored in blobs, such as CSV files, and store the values in Tables. In these cases the ingress function could be writing lots of rows (millions in some cases).

The WebJobs SDK makes it possible to implement this functionality easily, and lets you add real-time monitoring capabilities, such as the number of rows written to the table, so you can monitor the progress of the ingress function.
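
A sketch of such an ingress function, assuming a simple three-field CSV layout, an entity type derived from TableEntity, and illustrative container/table names:

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.WindowsAzure.Storage.Table;

public class Person : TableEntity
{
    public string Name { get; set; }
}

public class Ingress
{
    // Parses each line of an uploaded CSV blob and adds one row per record.
    public static void ParseCsv(
        [BlobTrigger("csv-input/{name}")] TextReader input,
        [Table("Persons")] ICollector<Person> rows)
    {
        string line;
        while ((line = input.ReadLine()) != null)
        {
            var fields = line.Split(',');
            rows.Add(new Person
            {
                PartitionKey = fields[0],
                RowKey = fields[1],
                Name = fields[2]
            });
        }
    }
}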

Azure Service Bus

Similar to Azure Queues, the SDK allows you to trigger functions when a new message is sent to a Service Bus queue or topic. The SDK lets you easily access the message contents by allowing you to bind to String, POCO (plain old CLR object), byte[] and BrokeredMessage.

Please read this announcement post for 0.3.0-beta for more details.
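
A sketch of a Service Bus-triggered function (this assumes the WebJobs SDK Service Bus extension package is referenced and the host is configured with a Service Bus connection string; the queue names are illustrative):

using Microsoft.Azure.WebJobs;

public class Functions
{
    // Fires on each new message on the "requests" Service Bus queue and
    // sends a reply to the "responses" queue.
    public static void ProcessMessage(
        [ServiceBusTrigger("requests")] string message,
        [ServiceBus("responses")] out string reply)
    {
        reply = "Processed: " + message;
    }
}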

General

Following are some of the other features of the SDK.

  • Async support: The SDK supports async functions.
  • CancellationTokens: A function can take a CancellationToken parameter and receive a cancellation request from the host.
  • NameResolver: The SDK has an extensibility layer which allows you to specify the source for queue names or blob names. For example, you can use this feature to pick up queue names from a config file. Look at this sample.
  • WebJobs shutdown notification: WebJobs has a graceful shutdown notification feature which raises a notification when the WebJob is stopping. The SDK supports this graceful shutdown by notifying you when the WebJob is shutting down; the notification flows to your function through the CancellationToken, which receives the cancellation request when the WebJob is stopping. A sketch of such a function follows this list.
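
A minimal sketch of such a function (the queue and container names are illustrative):

using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // The host signals the token when the WebJob is shutting down, giving
    // the function a chance to finish or bail out cleanly.
    public static async Task ProcessQueueMessageAsync(
        [QueueTrigger("orders")] string message,
        [Blob("processed/{queueTrigger}")] TextWriter output,
        CancellationToken cancellationToken)
    {
        if (cancellationToken.IsCancellationRequested)
        {
            return;
        }
        await output.WriteAsync(message);
    }
}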

Dashboard for monitoring WebJobs

As WebJobs execute, you can monitor them in real time. You can see their state (Running, Stopped, Successfully completed), last run time and the logs of a particular execution. The following screenshot shows you a view of all WebJobs running in your Website.

When you write a WebJob using the SDK, you get a diagnostics and monitoring experience for the functions in your program. For example, let’s say that you have an image processing WebJob called “ImageResizeAndWaterMark” that has the following flow.

When a user uploads an image to a blob container called “images-input”, the SDK triggers the Resize function. Resize processes the image and writes it to the “images2-output” container, which triggers the WaterMark function. WaterMark watermarks the image and writes it to the “images3-output” blob container. The following code shows the WebJob described above.

using System.IO;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Helpers;
using Microsoft.Azure.WebJobs;

public class ImageProcessing
{
    // Triggered when a new image lands in "images-input"; writes the
    // resized image to "images2-output".
    public static void Resize(
        [BlobTrigger(@"images-input/{name}")] WebImage input,
        [Blob(@"images2-output/{name}")] out WebImage output)
    {
        var width = 80;
        var height = 80;
        output = input.Resize(width, height);
    }

    // Triggered by the blob that Resize wrote; writes the watermarked
    // image to "images3-output".
    public static void WaterMark(
        [BlobTrigger(@"images2-output/{name}")] WebImage input,
        [Blob(@"images3-output/{name}")] out WebImage output)
    {
        output = input.AddTextWatermark("WebJobs", fontSize: 6);
    }
}

public class WebImageBinder : ICloudBlobStreamBinder<WebImage>
{
    // Teaches the SDK how to hydrate a WebImage from a blob stream...
    public Task<WebImage> ReadFromStreamAsync(Stream input, CancellationToken cancellationToken)
    {
        return Task.FromResult(new WebImage(input));
    }

    // ...and how to persist one back to a blob stream.
    public Task WriteToStreamAsync(WebImage value, Stream output, CancellationToken cancellationToken)
    {
        var bytes = value.GetBytes();
        return output.WriteAsync(bytes, 0, bytes.Length);
    }
}

When you run the WebJob in Azure, you can view the WebJobs Dashboard by clicking the logs link for “ImageResizeAndWaterMark” in the WEBJOBS tab of the Azure Web Sites portal.

[Screenshot: WebJobs in the portal]

Since the Dashboard is a Site Extension, you can access it by going to a URL of the form https://mysite.scm.azurewebsites.net/azurejobs. You will need your deployment credentials to access the Site Extension. For more information on accessing Site Extensions, see the documentation on the Kudu project: https://github.com/projectkudu/kudu/wiki/Accessing-the-kudu-service

Function execution details

When you are monitoring a particular execution of this “ImageResizeAndWaterMark” WebJob, you can view invocation details about the functions in the program such as:

  • The parameters that were passed to the function.
  • How long the function took to execute.
  • How long reads and writes to the blob took, and how many bytes were read and written.

[Screenshot: function execution details]

Invoke & Replay

In the above example, if the WaterMark function fails for some reason, you can upload a new image and replay the Resize function, which re-runs the execution chain and triggers the WaterMark function as well. This is useful for diagnosing and debugging an issue when you have a complicated graph of chained functions. You can also invoke a function directly from the dashboard.

Causality of functions

In the above example, we know that when the Resize function writes to a blob, it will trigger the WaterMark function. The dashboard shows this causality between functions. If you have chained lots of functions that get triggered as new inputs are detected, it can be useful to see this causality graph.

Search Blobs

You can click Search for a Blob and get information on what happened to that blob. For example, in the case of ImageResizeAndWaterMark, the blob was written because the WaterMark function executed. For more details on Search Blobs, see this post.


Accessing and Using Azure VM Unique ID

Microsoft has recently added support for an Azure VM unique ID. The Azure VM unique ID is a 128-bit identifier that is encoded and stored in every Azure IaaS VM’s SMBIOS and can be read using platform BIOS commands.

This identifier can be used in different scenarios, whether the VM is running on Azure or on-premises, and can help with licensing, reporting or general tracking requirements you may have for your Azure IaaS deployments. Many independent software vendors and partners building applications and certifying them on Azure need to identify an Azure VM throughout its lifecycle and to tell whether the VM is running on Azure, on-premises or on another cloud provider. This platform identifier can, for example, help detect whether the software is properly licensed, or help correlate VM data to its source, such as setting the right metrics for the right platform and tracking and correlating those metrics, among other uses.

This unique identifier cannot be modified; it can only be queried. Although only new VMs created after 09/18/2014 will have this capability enabled upon creation, VMs created prior to 09/18/2014 will automatically get it upon the next VM restart. The Azure VM unique ID won’t change upon reboot, shutdown (either planned or unplanned), start/stop (deallocate), service healing or restore in place. However, if a snapshot of the VM is copied to create a new instance, the Azure VM ID will change.

If you have older VMs that were created before this feature rolled out (9/18/2014), please restart your VM; it will automatically get a unique ID which you can start using after the restart.

To access Azure VM ID from within the VM, follow these steps:

Step 1 – Create a VM (only VMs created after 9/18/2014 will have this feature enabled by default)

Follow these steps: How to Create a Custom Virtual Machine

Step 2 – Connect to the VM

Follow these steps: How to Log on to a Virtual Machine Running Windows Server or How to Log on to a Virtual Machine Running Linux

Step 3 – Query VM Unique ID

Linux VMs

Command (Ubuntu):

sudo dmidecode | grep UUID

Example Expected Results:

UUID: 090556DA-D4FA-764F-A9F1-63614EDA019A

Due to big-endian byte ordering in the first three fields, the actual unique VM ID in this case is:

DA560509-FAD4-4F76-A9F1-63614EDA019A
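
If you need the swapped form programmatically, here is a small C# sketch that byte-swaps the first three fields of the reported UUID (the input is the example value above):

using System;
using System.Linq;

class UuidSwap
{
    // Reverses the bytes of one hyphen-delimited hex field,
    // e.g. "090556DA" -> "DA560509".
    static string SwapBytes(string hex)
    {
        return string.Concat(Enumerable.Range(0, hex.Length / 2)
            .Select(i => hex.Substring(hex.Length - 2 * (i + 1), 2)));
    }

    static void Main()
    {
        var parts = "090556DA-D4FA-764F-A9F1-63614EDA019A".Split('-');
        // Only the first three fields are byte-swapped; the last two read the same.
        Console.WriteLine(
            SwapBytes(parts[0]) + "-" + SwapBytes(parts[1]) + "-" + SwapBytes(parts[2])
            + "-" + parts[3] + "-" + parts[4]);
        // Prints: DA560509-FAD4-4F76-A9F1-63614EDA019A
    }
}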

Windows VMs

Commands:

# Win32_ComputerSystemProduct exposes the SMBIOS UUID through its UUID property
$computerSystemProduct = Get-WmiObject -Class Win32_ComputerSystemProduct -Namespace root\CIMV2

'BIOS GUID: "{0}"' -f $computerSystemProduct.UUID

Example expected results:

BIOS GUID: "BC5BCA47-A882-4096-BB2D-D76E6C170534"

No matter what your platform or application requirements on an Azure VM, you now have access to an Azure VM unique identifier.



Azure Websites Virtual Network Integration

Azure Websites is happy to announce support for integration between your Azure VNET and your Azure Websites. The Virtual Network feature grants your website access to resources running in your VNET, including web services or databases running on your Azure Virtual Machines. If your VNET is connected to your on-premises network with a Site-to-Site VPN, your Azure Website will now be able to access on-premises systems through the Azure Websites Virtual Network feature.

Azure Websites Virtual Network integration requires your Azure virtual network to have a dynamic routing gateway and to have Point-to-Site enabled. The feature is being released now in preview and is currently available only in the Standard tier. A Standard web hosting plan can have up to 5 networks connected, and a website can be connected to only one network at a time. However, there is no restriction on the number of websites that can be connected to a network.

This Virtual Network feature is accessible through the new Azure Preview portal and shows up alongside Hybrid Connections.
[Screenshot: Websites Virtual Networks]

Through the new user interface, you can connect to a pre-existing Azure VNET or can create a new one. This capability to attach to an Azure VNET is not something that must be done when creating the website but can be added, changed or removed at any point. The only restrictions are that you must be in the correct pricing plan and that you have not met your quota limit for the plan.

[Screenshot: Websites Add Virtual Network]

The Virtual Network feature supports both TCP and UDP protocols and will work with your VNET DNS. Hybrid Connections and Virtual Network are compatible, so you can mix both in the same website. It is important to note that while some use cases overlap between Virtual Network and Hybrid Connections, each offers capabilities that are very useful in their own right.

Hybrid Connections offer the ability to access a remote application. The Hybrid Connections agent can be deployed in any network and connects back to Azure. This provides the ability to access application endpoints in multiple networks and does not depend on configuring a VNET. Virtual Network gives access to all the resources in the VNET and does not require installation of an agent. Azure Site-to-Site VPN lets enterprises connect their on-premises networks to their Azure network using the tools they are used to. Both features offer important capabilities and complete the remote data access story for Azure Websites.

 SR. Program Manager, Azure Websites


PowerShell Seen as Key to Azure Automation

The Azure Automation service currently depends on “20 Azure Automation cmdlets,” which allow the scripting of repeatable actions, or runbooks, on the Microsoft Azure cloud platform. However, when Microsoft releases the Azure Automation service as a product, that number will double, enabling full automation via PowerShell.

“By Azure Automation’s general availability, we expect to have around 40 cmdlets to allow complete control of Azure Automation via PowerShell,” Microsoft explained.

The 20 currently available Azure Automation cmdlets are listed at this page. Other Azure script resources can be found in Microsoft’s Script Center. The new complete PowerShell management capability will be important not just for automating Microsoft Azure workloads. Microsoft has also claimed that it will enable integration and automation with other cloud services as well.

Setup of Azure Automation seems a bit cumbersome at present. It requires passing the same Automation account name parameter into each Azure Automation cmdlet. That process can be smoothed somewhat by the so-called PowerShell “splatting” technique, Microsoft’s announcement explained. Splatting is a way of passing parameter values to a PowerShell command as a single unit. The technique is described in this TechNet library article.
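
For example, here is a short sketch of splatting with the Automation cmdlets (the account name is illustrative):

# Collect the parameter every Azure Automation cmdlet needs into a hashtable...
$automation = @{
    AutomationAccountName = "Contoso-Automation"
}

# ...then splat it into each call instead of retyping the parameter.
Get-AzureAutomationRunbook @automation
Get-AzureAutomationSchedule @automation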

In any case, PowerShell has been Microsoft’s answer to organizations that may have used graphical user interfaces (GUIs) in the past for repeated tasks but found themselves hobbled by the GUI in larger computing environments. It’s thought to be “easier” to automate runbooks in datacenters by using the command line interface scripting environment of PowerShell vs. using a GUI. Sometimes, the GUI just can’t do the job.

Microsoft has a few other tools besides the Azure Automation Portal that can be used for setting up the automation of Azure tasks. System Center 2012 Service Management Automation handles runbooks the same as Azure Automation, according to Microsoft’s “Runbooks Concepts” description. System Center 2012 Orchestrator also can be used for automation. Orchestrator doesn’t require scripting to set up the workflows, so Microsoft advocates its use if an organization isn’t using the Windows Azure Pack (a bundle of Microsoft Azure integration technologies for enterprises and service providers).

It’s not clear when Microsoft plans to release the Azure Automation service, but it’s expected to cost about $20 per month for the standard version when available. A free but limited version is also part of Microsoft’s release plans.

By Kurt Mackie


Microsoft Cloud – Which SSO option is right for your company?

As IT organizations begin to implement Microsoft Cloud services, the need for single sign-on capability increases. Single sign-on, or SSO, allows users to log in once with their account and password and gain access to all of their systems without having to log in again to each of them. It significantly reduces administrative costs while increasing user productivity.

The different Microsoft Cloud subscriptions (Office 365, Intune, Azure, etc.) all leverage directories hosted by Windows Azure Active Directory. A directory is created when you set up a subscription and provide a name. The suffix .onmicrosoft.com is appended to the name you provide and becomes the domain name for users added to the service (e.g. AcmeOffice365.onmicrosoft.com).

Most companies want to leverage their own domain (e.g. user@acme.com) for user accounts and use those credentials for access to email and other services. SSO can be set up across your on-premises and cloud Microsoft environments, but since customers have so many different configurations and requirements, designing and configuring an SSO solution can be very complex.

There are three different ways to achieve single sign-on with Microsoft Cloud services and Windows Azure Active Directory. Which alternative fits depends on an organization’s particular environment and requirements.

No Synchronization

With this alternative, all accounts are created and maintained in Windows Azure Active Directory. Users authenticate through https://login.microsoftonline.com with their organizational account, which is <user name>@<company domain> once the company domain is added to the subscription through the Office 365 or Azure portals. This SSO alternative is called “No Synchronization” because there is only one directory and therefore no synchronization with an on-premises domain.

Setup for this SSO alternative is the simplest. The company’s domain is first verified through DNS and a new directory is created in Windows Azure Active Directory for the domain.   The directory is marked as the default directory and users are added to that directory.   Exchange Online and DNS are configured to use that domain name.

This option is a good alternative for organizations that are cutting over entirely to cloud-based services. It doesn’t require any on-premises components, and there is no synchronization to another directory service. It is not a feasible SSO alternative for organizations maintaining on-premises IT services.

Directory and Password Synchronization

Most companies are implementing specific cloud-based services, not necessarily transitioning all services to the cloud. Therefore, they typically have existing Active Directory domains and on-premises IT services that will be maintained going forward.    

With this alternative, accounts and passwords are maintained in the on-premises Active Directory. The account information and password hash values are synchronized to the directory in Windows Azure Active Directory. This is not actually an SSO solution; it is a “same sign-on” solution. Users that access local resources are authenticated locally by a domain controller, and if that same user accesses a Microsoft cloud resource, they are authenticated again in the cloud. The benefit to them is that they use the same logon and password as they do on-premises.

Setting up involves:

  • Verifying and registering your domain with Windows Azure Active Directory
  • Installing and configuring DirSync for directory and password synchronization
  • Licensing synched accounts for cloud services

This option is easy to configure and allows users to leverage the same credentials for all IT services. It doesn’t provide a true single sign-on experience, however, and therefore some features, like Exchange free/busy, will not work seamlessly.

Directory Synchronization and Active Directory Federation Services

This alternative is the only true SSO solution for users that require access to both on-premises and cloud systems. It is also required to ensure all functionality is available to users of hybrid Exchange and SharePoint environments. This alternative involves implementing a security token service (typically an ADFS farm) that is trusted by a federated domain in Windows Azure Active Directory. All logons are redirected to ADFS, and ADFS issues security tokens that are then passed to the trusting services.

Setting up involves:

  • Obtaining domain certificate from trusted authority
  • Verifying and registering your domain with Windows Azure Active Directory
  • Configuring your domain for federation
  • Installing and configuring DirSync for directory sync
  • Licensing synched accounts for cloud services
  • Installing and configuring ADFS servers
  • Installing and configuring ADFS proxy servers

This option provides a seamless single sign-on experience, but involves a much greater effort to plan, implement and operate.

Hopefully this has provided you some good information to help you understand your SSO options for Microsoft Cloud services. Cloud 9 has developed an SSO QuickStart service offering geared toward quickly implementing a Microsoft single sign-on solution. Give us a call at 1-855 2 CLOUD 9 to learn more about this or other cloud services.


Trouble getting started with Microsoft Azure?

Cloud 9 has helped several organizations save money and become more agile by moving their on-premises systems to Microsoft Azure, both Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solutions. We’ve found that while PaaS solutions provide the greatest long term benefit, IaaS solutions are a quicker path to the cloud. You realize the cost savings sooner, gain valuable experience operating cloud services, and are better positioned to move to PaaS in the future.
We have taken the experience gained and intellectual property created over the years and refined them into a services offering for our customers. The QuickStart for Microsoft Azure IaaS is a short, focused engagement that follows a proven methodology to assess, design, plan, and execute your migration to an IaaS cloud solution.
The engagement is led by a Cloud 9 Azure consultant who works closely with your IT resources. The key engagement deliverables are:

  • A detailed assessment outlining issues/risks with suggested remediation/mitigation steps
  • A cloud architecture design identifying Azure components, estimated monthly run costs and key operational processes
  • A detailed migration plan addressing cloud configuration and system cut-over tasks
  • Your IT service running as a Microsoft Azure IaaS solution
  • Knowledge transfer on Microsoft Azure best practices throughout the engagement

A typical IaaS Quickstart engagement takes three to six weeks to complete, depending upon the complexity of the systems targeted for migration to Azure. Cloud 9 will work closely with you to scope the effort and develop an effort/cost estimate.

If you would like to learn more about the QuickStart for Microsoft Azure IaaS, call us at 1-855 2 CLOUD 9.


Azure Import/Export service now generally available

You can use the Microsoft Azure Import/Export service to transfer large amounts of file data to Azure Blob storage in situations where uploading over the network is prohibitively expensive or not feasible. You can also use the Import/Export service to transfer large quantities of data resident in Blob storage to your on-premises installations in a timely and cost-effective manner.

To transfer a large set of file data into Blob storage, you can send one or more hard drives containing that data to an Azure data center, where your data will be uploaded to your storage account. Similarly, to export data from Blob storage, you can send empty hard drives to an Azure data center, where the Blob data from your storage account will be copied to your hard drives and then returned to you. Before you send in a drive that contains data, you’ll encrypt the data on the drive; when the Import/Export service exports your data to send to you, the data will also be encrypted before shipping.

You can create and manage import and export jobs in one of two ways: through the Azure Management Portal, or programmatically using the Import/Export service REST API.

Read More: http://azure.microsoft.com/en-us/documentation/articles/storage-import-export-service/

 
