Accessing and Using Azure VM Unique ID

Microsoft has recently added support for an Azure VM unique ID. The Azure VM unique ID is a 128-bit identifier that is encoded and stored in every Azure IaaS VM's SMBIOS and can be read using platform BIOS commands.

This identifier can be used whether the VM is running on Azure or on-premises, and can help with licensing, reporting, or general tracking requirements you may have for your Azure IaaS deployments. Many independent software vendors and partners who build applications and certify them on Azure need to identify an Azure VM throughout its lifecycle and to tell whether the VM is running on Azure, on-premises, or on another cloud provider. This platform identifier can, for example, help detect whether the software is properly licensed, or help correlate VM data to its source, such as assigning metrics to the right platform and tracking and correlating those metrics.

This unique identifier cannot be modified; it can only be queried. Only VMs created after 09/18/2014 have this capability enabled upon creation; VMs created before 09/18/2014 automatically get it upon the next VM restart. The Azure VM unique ID won't change upon reboot, shutdown (planned or unplanned), start/stop/deallocate, service healing, or in-place restore. However, if a snapshot of the VM is copied to create a new instance, the new VM gets a different Azure VM ID.

If you have older VMs that were created before this feature rolled out (9/18/2014) and have been running since, restart them to get a unique ID that you can start using after the restart.

To access Azure VM ID from within the VM, follow these steps:

Step 1 – Create a VM (only VMs created after 9/18/2014 will have this feature enabled by default)

Follow these steps: How to Create a Custom Virtual Machine

Step 2 – Connect to the VM

Follow these steps: How to Log on to a Virtual Machine Running Windows Server or How to Log on to a Virtual Machine Running Linux

Step 3 – Query VM Unique ID

Linux VMs

Command (Ubuntu):

sudo dmidecode | grep UUID

Example Expected Results:

UUID: 090556DA-D4FA-764F-A9F1-63614EDA019A
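Besides dmidecode, recent Linux kernels expose the same SMBIOS UUID through sysfs at /sys/class/dmi/id/product_uuid. A minimal Python sketch (the helper name is ours; like dmidecode, reading the file typically requires root):

```python
from pathlib import Path

def read_smbios_uuid(path="/sys/class/dmi/id/product_uuid"):
    """Read the SMBIOS product UUID the kernel exposes via sysfs.
    On most distributions this file is readable by root only."""
    return Path(path).read_text().strip()
```

This returns the same string dmidecode prints, so the byte-ordering caveat below applies to it as well.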

Due to a byte-ordering (endianness) difference in the first three fields of the SMBIOS UUID, the actual unique VM ID in this case is:

DA 56 05 09 - FA D4 - 4F 76 - A9F1-63614EDA019A
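The field reordering can be scripted. A small Python sketch (the function name is ours) that converts the UUID as reported by dmidecode into the actual VM ID by reversing the bytes of the first three fields:

```python
import uuid

def smbios_to_vm_id(smbios_uuid: str) -> str:
    """Reverse the bytes of the first three UUID fields (4, 2 and 2
    bytes) as reported by dmidecode; the last two fields (2 + 6 bytes)
    are left unchanged."""
    b = uuid.UUID(smbios_uuid).bytes
    swapped = b[3::-1] + b[5:3:-1] + b[7:5:-1] + b[8:]
    return str(uuid.UUID(bytes=swapped)).upper()

print(smbios_to_vm_id("090556DA-D4FA-764F-A9F1-63614EDA019A"))
# -> DA560509-FAD4-4F76-A9F1-63614EDA019A
```

The standard library's `UUID.bytes_le` encodes the same mixed-endian layout, so the manual slicing above is shown only to make the swap explicit.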

Windows VMs

Commands:

# Query SMBIOS system information via WMI
$systemEnclosure = Get-WmiObject -Class Win32_SystemEnclosure -Namespace root\CIMV2
$computerSystemProduct = Get-WmiObject -Class Win32_ComputerSystemProduct -Namespace root\CIMV2

# The UUID property of Win32_ComputerSystemProduct holds the BIOS GUID
'BIOS GUID: "{0}"' -f $computerSystemProduct.UUID

Example expected results:

BIOS GUID: "BC5BCA47-A882-4096-BB2D-D76E6C170534"

No matter what your platform or application requirements on Azure VMs, you now have access to an Azure VM unique identifier.



Azure Websites Virtual Network Integration

Azure Websites is happy to announce support for integration between your Azure VNET and your Azure Websites. The Virtual Network feature grants your website access to resources running in your VNET, including web services or databases running on your Azure Virtual Machines. If your VNET is connected to your on-premises network with a Site-to-Site VPN, your Azure Website can now access on-premises systems through the Azure Websites Virtual Network feature.

Azure Websites Virtual Network integration requires your Azure virtual network to have a dynamic routing gateway and to have Point-to-Site enabled. The feature is being released now in preview and is currently available only in the Standard tier. A Standard web hosting plan can have up to five networks connected, while a website can be connected to only one network. However, there is no restriction on the number of websites that can be connected to a network.

This Virtual Network feature is accessible through the new Azure Preview portal and shows up alongside Hybrid Connections.
[Screenshot: Websites Virtual Networks]

Through the new user interface, you can connect to a pre-existing Azure VNET or can create a new one. This capability to attach to an Azure VNET is not something that must be done when creating the website but can be added, changed or removed at any point. The only restrictions are that you must be in the correct pricing plan and that you have not met your quota limit for the plan.

[Screenshot: Add virtual network]

The Virtual Network feature supports both TCP and UDP protocols and works with your VNET DNS. Hybrid Connections and Virtual Network are compatible, so you can mix both in the same website. It is important to note that while some use cases overlap between Virtual Network and Hybrid Connections, each offers distinct capabilities that are very useful in their own right.

Hybrid Connections offer the ability to access a remote application. The Hybrid Connections agent can be deployed in any network and connects back to Azure. This provides the ability to access application endpoints in multiple networks and does not depend on configuring a VNET to do so. Virtual Network gives access to all the resources in the VNET and does not require installation of an agent. The Azure Site-to-Site VPN allows enterprises to connect their on-premises networks to their Azure network using tools they are used to. Both features offer important capabilities and complete the remote data access story for Azure Websites.

Sr. Program Manager, Azure Websites


PowerShell Seen as Key to Azure Automation

The Azure Automation service currently depends on 20 Azure Automation cmdlets, which allow the scripting of repeatable actions, or runbooks, on the Microsoft Azure cloud platform. When Microsoft releases the Azure Automation service as a product, that number will double, enabling full automation via PowerShell.

“By Azure Automation’s general availability, we expect to have around 40 cmdlets to allow complete control of Azure Automation via PowerShell,” Microsoft explained.

The 20 currently available Azure Automation cmdlets are listed at this page. Other Azure script resources can be found in Microsoft's Script Center. The new complete PowerShell management capability will be important not just for automating Microsoft Azure workloads; Microsoft has also claimed that it will enable integration and automation with other cloud services as well.

Setup of Azure Automation seems a bit cumbersome at present. It requires passing the same Automation account name parameter into each Azure Automation cmdlet. That process can be smoothed somewhat by the so-called PowerShell “splatting” technique, Microsoft’s announcement explained. Splatting is a way of passing parameter values to a PowerShell command as a single unit. The technique is described in this TechNet library article.
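Splatting, in short, collects parameter values in a hashtable and hands them to each cmdlet as a single @-prefixed unit. The same pattern exists in most languages; as a neutral illustration, here is the Python analogue using dictionary unpacking (the functions and names below are hypothetical stand-ins, not Azure Automation cmdlets):

```python
# Shared parameters gathered once, analogous to a PowerShell hashtable
# that would be splatted into each Azure Automation cmdlet as @common.
common = {"automation_account": "MyAutomationAccount"}

def get_runbook(name, automation_account):
    # Hypothetical stand-in for a cmdlet that needs the account name.
    return f"{automation_account}:{name}"

def start_runbook(name, automation_account):
    # A second stand-in reusing the same shared parameter unit.
    return f"started {automation_account}:{name}"

print(get_runbook("Backup-VMs", **common))
print(start_runbook("Backup-VMs", **common))
```

Either way, the account name is typed once instead of being repeated in every call.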

In any case, PowerShell has been Microsoft’s answer to organizations that may have used graphical user interfaces (GUIs) in the past for repeated tasks but found themselves hobbled by the GUI in larger computing environments. It’s thought to be “easier” to automate runbooks in datacenters by using the command line interface scripting environment of PowerShell vs. using a GUI. Sometimes, the GUI just can’t do the job.

Microsoft has a few other tools besides the Azure Automation Portal that can be used for setting up the automation of Azure tasks. System Center 2012 Service Management Automation handles runbooks the same as Azure Automation, according to Microsoft’s “Runbooks Concepts” description. System Center 2012 Orchestrator also can be used for automation. Orchestrator doesn’t require scripting to set up the workflows, so Microsoft advocates its use if an organization isn’t using the Windows Azure Pack (a bundle of Microsoft Azure integration technologies for enterprises and service providers).

It’s not clear when Microsoft plans to release the Azure Automation service, but it’s expected to cost about $20 per month for the standard version when available. A free but limited version is also part of Microsoft’s release plans. 

 

By Kurt Mackie

 


Microsoft Cloud – Which SSO option is right for your company?

As IT organizations begin to implement Microsoft Cloud services, the need for single sign-on increases. Single sign-on, or SSO, allows users to log in once with their account and password and gain access to all of their systems without having to log in again to each of those systems. It significantly reduces administrative costs while increasing user productivity.

The different Microsoft Cloud subscriptions (Office 365, Intune, Azure, etc.) all leverage directories hosted by Windows Azure Active Directory. A directory is created when you set up a subscription and provide a name. The suffix .onmicrosoft.com is appended to the name you provide and becomes the domain name for users added to the service (e.g. AcmeOffice365.onmicrosoft.com).

Most companies want to use their own domain (e.g. user@acme.com) for user accounts and use those credentials for access to email and other services. SSO can be set up across your on-premises and cloud Microsoft environments, but because customers have so many different configurations and requirements, designing and configuring an SSO solution can be very complex.

There are three different ways to achieve single sign-on with Microsoft Cloud services and Windows Azure Active Directory. Each alternative fits an organization’s particular environment and/or requirements.  

No Synchronization

With this alternative, all accounts are created and maintained in Windows Azure Active Directory. Users authenticate through https://login.microsoftonline.com with their organizational account, which is <user name>@<company domain> because the company domain is added to the subscription through the Office 365 or Azure portals.   This SSO alternative is called “No Synchronization” because there is only one directory and therefore no synchronization with an on-premises domain.

Setup for this SSO alternative is the simplest. The company’s domain is first verified through DNS and a new directory is created in Windows Azure Active Directory for the domain.   The directory is marked as the default directory and users are added to that directory.   Exchange Online and DNS are configured to use that domain name.

This option is a good alternative for organizations that are cutting over entirely to cloud based services. It doesn’t require any on-premises components and there is no synchronization to another directory service. It is not a feasible SSO alternative for organizations maintaining on-premises IT services.

Directory and Password Synchronization

Most companies are implementing specific cloud-based services, not necessarily transitioning all services to the cloud. Therefore, they typically have existing Active Directory domains and on-premises IT services that will be maintained going forward.    

With this alternative, accounts and passwords are maintained in the on-premises Active Directory. The account information and password hash values are synchronized to the directory in Windows Azure Active Directory. This is not actually an SSO solution; it is a "same sign-on" solution. Users accessing local resources are authenticated locally by a domain controller, and if the same user accesses a Microsoft cloud resource, they are authenticated again in the cloud. The benefit to them is that they use the same logon and password as they do on-premises.

Setting up involves:

  • Verifying and registering your domain with Windows Azure Active Directory
  • Installing and configuring DirSync for directory and password synchronization
  • Licensing synched accounts for cloud services

This option is easy to configure and allows users to leverage the same credentials for all IT services. It doesn't provide a true single sign-on experience, however, so some features, like Exchange free/busy, will not work seamlessly.

Directory Synchronization and Active Directory Federation Services

This alternative is the only true SSO solution for users that require access to both on-premises and cloud systems. It is also required to ensure all functionality is available to users of Hybrid Exchange and SharePoint environments. This alternative involves implementing a security token service (typically an ADFS farm) that trusts a federated domain in Windows Azure Active Directory.  All logons are redirected to ADFS and ADFS issues security tokens that are then passed to the trusting services.  

Setting up involves:

  • Obtaining a domain certificate from a trusted authority
  • Verifying and registering your domain with Windows Azure Active Directory
  • Configuring your domain for federation
  • Installing and configuring DirSync for directory sync
  • Licensing synched accounts for cloud services
  • Installing and configuring ADFS servers
  • Installing and configuring ADFS proxy servers

This option provides a seamless single sign-on experience, but involves much greater effort to plan, implement, and operate.

Hopefully this has provided you some good information to help you understand your SSO options for Microsoft Cloud services. Cloud 9 has developed an SSO QuickStart service offering geared toward quickly implementing a Microsoft single sign-on solution. Give us a call at 1-855 2 CLOUD 9 to learn more about this or other cloud services.


Trouble getting started with Microsoft Azure?

Cloud 9 has helped several organizations save money and become more agile by moving their on-premises systems to Microsoft Azure, both Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solutions. We’ve found that while PaaS solutions provide the greatest long term benefit, IaaS solutions are a quicker path to the cloud. You realize the cost savings sooner, gain valuable experience operating cloud services, and are better positioned to move to PaaS in the future.
We have taken the experience gained and intellectual property created over the years and refined them into a services offering for our customers. The QuickStart for Microsoft Azure IaaS is a short, focused engagement that follows a proven methodology to assess, design, plan, and execute your migration to an IaaS cloud solution.
The engagement is led by a Cloud 9 Azure consultant that works closely with your IT resources. The key engagement deliverables are:

  • A detailed assessment outlining issues/risks with suggested remediation/mitigation steps
  • A cloud architecture design identifying Azure components, estimated monthly run costs  and key operational processes
  • Detailed migration plan addressing cloud configuration and system cut-over tasks
  • Your IT service running as a Microsoft Azure IaaS solution
  • Knowledge transfer on Microsoft Windows Azure best practices throughout the engagement

A typical IaaS Quickstart engagement takes three to six weeks to complete, depending upon the complexity of the systems targeted for migration to Azure. Cloud 9 will work closely with you to scope the effort and develop an effort/cost estimate.

If you would like to learn more about the QuickStart for Microsoft Azure IaaS, call us at 1-855 2 CLOUD 9.


Azure Import/Export service now generally available

You can use the Microsoft Azure Import/Export service to transfer large amounts of file data to Azure Blob storage in situations where uploading over the network is prohibitively expensive or not feasible. You can also use the Import/Export service to transfer large quantities of data resident in Blob storage to your on-premises installations in a timely and cost-effective manner.

To transfer a large set of file data into Blob storage, you can send one or more hard drives containing that data to an Azure data center, where your data will be uploaded to your storage account. Similarly, to export data from Blob storage, you can send empty hard drives to an Azure data center, where the Blob data from your storage account will be copied to your hard drives and then returned to you. Before you send in a drive that contains data, you’ll encrypt the data on the drive; when the Import/Export service exports your data to send to you, the data will also be encrypted before shipping.
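Whether shipping drives beats uploading is largely arithmetic: compare the data volume against your sustained link speed. A rough Python sketch (illustrative figures only; real-world throughput varies):

```python
def transfer_days(terabytes: float, mbps: float) -> float:
    """Days needed to move a dataset over a sustained network link,
    as a rough aid for choosing between upload and shipping drives."""
    bits = terabytes * 1e12 * 8      # decimal TB to bits
    seconds = bits / (mbps * 1e6)    # at the given megabits/second
    return seconds / 86400

# e.g. 10 TB over a sustained 100 Mbps link takes roughly 9 days:
print(f"{transfer_days(10, 100):.1f} days")
```

At that scale, shipping encrypted drives to the data center is often the faster and cheaper option.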

You can create and manage import and export jobs in one of two ways:

Read More: http://azure.microsoft.com/en-us/documentation/articles/storage-import-export-service/

 


Azure HDInsight previewing HBase clusters as a NoSQL database on Azure Blobs

On June 3, Microsoft announced an update to HDInsight to support Hadoop 2.4 for 100x faster queries. Today, we are announcing the preview of Apache HBase clusters inside HDInsight.

HBase is a low-latency NoSQL database that allows online transactional processing (OLTP) of big data. HBase is offered as a managed cluster integrated into the Azure environment. The clusters are configured to store data directly in Azure Blob storage that provides low latency and elasticity between performance and cost. This enables customers to build interactive websites that work with large datasets, to build services that store sensor and telemetry data from millions of end points, and to analyze this data with Hadoop jobs.

How To Create an HBase Cluster

To try HBase during the preview, you can use PowerShell:

1. Install Windows Azure PowerShell

2. Set up your environment

3. Capture cluster credentials in a variable

PS C:\> $creds = Get-Credential

4. Create HBase cluster:

PS C:\> New-AzureHDInsightCluster -Name yourclustername -ClusterType HBase -Version 3.0 -Location "West US" `
-DefaultStorageAccountName yourstorageaccount.blob.core.windows.net -DefaultStorageAccountKey "yourstorageaccountkey" `
-DefaultStorageContainerName hbasecontainername -Credential $creds -ClusterSizeInNodes 4

Manipulating Data in HBase Cluster

Application developers can access HBase data through REST APIs, the HBase shell, and different types of MapReduce jobs such as Hive and Pig. The HBase shell provides an interactive console to manage an HBase cluster, create and drop tables, and manipulate the data in them.

1. To open the HBase shell, first enable an RDP connection to the cluster and connect to it

After the cluster is created, it will appear in the Azure Portal under the HDInsight service.

Open the CONFIGURATION tab of the cluster.

Click the ENABLE REMOTE button at the bottom of the page to enable the RDP connection to the cluster.

Click the CONNECT button at the bottom of the CONFIGURATION tab.

2. Open the HBase Shell

Within your RDP session, click on the Hadoop command prompt shortcut located on the desktop.

Open the HBase shell:

cd %HBASE_HOME%\bin

hbase shell

3. Create a sample table, add a row to the table and list the rows in the table:

create 'sampletable', 'cf1'

put 'sampletable', 'row1', 'cf1:col1', 'value1'

scan 'sampletable'
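The session above reflects HBase's data model: a table holds rows, and each row stores values under "columnfamily:qualifier" keys. A toy in-memory sketch of that model in Python (an illustration of the concepts only, not an HBase client):

```python
# Toy model of HBase's table -> row -> "family:qualifier" -> value layout.
tables = {}

def create(table, *families):
    # Mirrors: create 'sampletable', 'cf1'
    tables[table] = {"families": set(families), "rows": {}}

def put(table, row, col, value):
    # Mirrors: put 'sampletable', 'row1', 'cf1:col1', 'value1'
    family = col.split(":", 1)[0]
    assert family in tables[table]["families"], "unknown column family"
    tables[table]["rows"].setdefault(row, {})[col] = value

def scan(table):
    # Mirrors: scan 'sampletable' -- rows come back in key order
    return sorted(tables[table]["rows"].items())

create("sampletable", "cf1")
put("sampletable", "row1", "cf1:col1", "value1")
print(scan("sampletable"))  # [('row1', {'cf1:col1': 'value1'})]
```

Real HBase adds versioned cells and distributed storage on top of this shape, but the addressing scheme is the same.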
