
Contents

Azure Storage Documentation


Overview
Introduction
Choose Blobs, Files, or Disks
Blobs
Data Lake Storage Gen2
Files
Queues
Tables
Disks
Windows
Linux
Introduction to Azure Storage
8/6/2018 • 12 minutes to read

Azure Storage is Microsoft's cloud storage solution for modern data storage scenarios. Azure Storage offers a
massively scalable object store for data objects, a file system service for the cloud, a messaging store for reliable
messaging, and a NoSQL store. Azure Storage is:
Durable and highly available. Redundancy ensures that your data is safe in the event of transient hardware
failures. You can also opt to replicate data across datacenters or geographical regions for additional protection
from local catastrophe or natural disaster. Data replicated in this way remains highly available in the event of an
unexpected outage.
Secure. All data written to Azure Storage is encrypted by the service. Azure Storage provides you with fine-
grained control over who has access to your data.
Scalable. Azure Storage is designed to be massively scalable to meet the data storage and performance needs
of today's applications.
Managed. Microsoft Azure handles maintenance and any critical problems for you.
Accessible. Data in Azure Storage is accessible from anywhere in the world over HTTP or HTTPS. Microsoft
provides SDKs for Azure Storage in a variety of languages -- .NET, Java, Node.js, Python, PHP, Ruby, Go, and
others -- as well as a mature REST API. Azure Storage supports scripting in Azure PowerShell or Azure CLI.
And the Azure portal and Azure Storage Explorer offer easy visual solutions for working with your data.

Azure Storage services


Azure Storage includes these data services:
Azure Blobs: A massively scalable object store for text and binary data.
Azure Files: Managed file shares for cloud or on-premises deployments.
Azure Queues: A messaging store for reliable messaging between application components.
Azure Tables: A NoSQL store for schemaless storage of structured data.
Each service is accessed through a storage account. To get started, see Create a storage account.
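Each data service is reachable at its own endpoint under the account. As a rough sketch (the account name here is hypothetical, and non-public Azure clouds use different domains), the default endpoints follow a predictable pattern:

```python
# Sketch: default public endpoint for each Azure Storage data service.
# "mystorageaccount" is a hypothetical account name; real endpoints depend
# on the account name and the Azure environment.

def service_endpoints(account_name: str) -> dict:
    """Build the default endpoint URL for each of the four data services."""
    services = ("blob", "file", "queue", "table")
    return {s: f"https://{account_name}.{s}.core.windows.net" for s in services}

endpoints = service_endpoints("mystorageaccount")
print(endpoints["blob"])  # https://mystorageaccount.blob.core.windows.net
```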

Blob storage
Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing
massive amounts of unstructured data, such as text or binary data.
Blob storage is ideal for:
Serving images or documents directly to a browser.
Storing files for distributed access.
Streaming video and audio.
Storing data for backup and restore, disaster recovery, and archiving.
Storing data for analysis by an on-premises or Azure-hosted service.
Objects in Blob storage can be accessed from anywhere in the world via HTTP or HTTPS. Users or client
applications can access blobs via URLs, the Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure
Storage client library. The storage client libraries are available for multiple languages, including .NET, Java, Node.js,
Python, PHP, and Ruby.
For more information about Blob storage, see Introduction to object storage on Azure.

Azure Files
Azure Files enables you to set up highly available network file shares that can be accessed by using the standard
Server Message Block (SMB) protocol. That means that multiple VMs can share the same files with both read and
write access. You can also read the files using the REST interface or the storage client libraries.
One thing that distinguishes Azure Files from files on a corporate file share is that you can access the files from
anywhere in the world using a URL that points to the file and includes a shared access signature (SAS) token. You
can generate SAS tokens; they grant specific access to a private asset for a specific amount of time.
File shares can be used for many common scenarios:
Many on-premises applications use file shares. This feature makes it easier to migrate those applications
that share data to Azure. If you mount the file share to the same drive letter that the on-premises application
uses, the part of your application that accesses the file share should work with minimal, if any, changes.
Configuration files can be stored on a file share and accessed from multiple VMs. Tools and utilities used by
multiple developers in a group can be stored on a file share, ensuring that everybody can find them, and that
they use the same version.
Diagnostic logs, metrics, and crash dumps are just three examples of data that can be written to a file share
and processed or analyzed later.
At this time, Active Directory-based authentication and access control lists (ACLs) are not supported, although
support is planned. The storage account credentials are used to authenticate access to the file share, which means
that anyone who mounts the share has full read/write access to it.
For more information about Azure Files, see Introduction to Azure Files.

Queue storage
The Azure Queue service is used to store and retrieve messages. Queue messages can be up to 64 KB in size, and a
queue can contain millions of messages. Queues are generally used to store lists of messages to be processed
asynchronously.
For example, say you want your customers to be able to upload pictures, and you want to create thumbnails for
each picture. You could make customers wait while the thumbnails are created during upload. An alternative is to
use a queue: when a customer finishes an upload, write a message to the queue, then have an Azure Function
retrieve the message from the queue and create the thumbnails. Each part of this processing can be scaled
separately, giving you more control when tuning it for your usage.
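That decoupling can be sketched with an in-memory queue standing in for the Azure queue (the real service is accessed over HTTP; all names here are illustrative):

```python
import queue

# Stand-in for an Azure queue: the uploader enqueues a small message per
# picture, and a worker (e.g. an Azure Function) dequeues and processes it
# independently, so the two sides can scale separately.
work = queue.Queue()

def on_upload_complete(picture_name: str) -> None:
    """Producer: write a message instead of doing the work inline."""
    work.put({"picture": picture_name, "task": "create-thumbnail"})

def process_one() -> str:
    """Consumer: retrieve a message and create the thumbnail."""
    msg = work.get()
    return f"thumbnail for {msg['picture']}"

on_upload_complete("cat.jpg")
on_upload_complete("dog.jpg")
print(process_one())  # thumbnail for cat.jpg
```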
For more information about Azure Queues, see Introduction to Queues.

Table storage
Azure Table storage is now part of Azure Cosmos DB. To see Azure Table storage documentation, see the Azure
Table Storage Overview. In addition to the existing Azure Table storage service, there is a new Azure Cosmos DB
Table API offering that provides throughput-optimized tables, global distribution, and automatic secondary indexes.
To learn more and try out the new premium experience, please check out Azure Cosmos DB Table API.
For more information about Table storage, see Overview of Azure Table storage.

Disk storage
Azure Storage also includes managed and unmanaged disk capabilities used by virtual machines. For more
information about these features, please see the Compute Service documentation.

Types of storage accounts


This table shows the various kinds of storage accounts and which objects can be used with each.

TYPE OF STORAGE ACCOUNT | SERVICES SUPPORTED | TYPES OF BLOBS SUPPORTED
General-purpose Standard | Blob, File, and Queue services | Block blobs, page blobs, and append blobs
General-purpose Premium | Blob service | Page blobs
Blob Storage (hot and cool access tiers) | Blob service | Block blobs and append blobs

General-purpose storage accounts


There are two kinds of general-purpose storage accounts.
Standard storage
The most widely used storage accounts are standard storage accounts, which can be used for all types of data.
Standard storage accounts use magnetic media to store data.
Premium storage
Premium storage provides high-performance storage for page blobs, which are primarily used for VHD files.
Premium storage accounts use SSD to store data. Microsoft recommends using Premium Storage for all of your
VMs.
Blob Storage accounts
The Blob Storage account is a specialized storage account used to store block blobs and append blobs. You can't
store page blobs in these accounts, and therefore can't store VHD files. These accounts allow you to set the access
tier to Hot or Cool; the tier can be changed at any time.
The hot access tier is used for files that are accessed frequently -- you pay a higher cost for storage, but the cost of
accessing the blobs is much lower. For blobs stored in the cool access tier, you pay a higher cost for accessing the
blobs, but the cost of storage is much lower.
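The tradeoff can be made concrete with a little arithmetic. The prices below are invented purely for illustration; see the Azure Storage pricing page for actual rates:

```python
# Hypothetical numbers to illustrate the tier tradeoff; the prices below are
# invented for this sketch -- see the Azure Storage pricing page for real rates.

def monthly_cost(gb_stored, reads, price_per_gb, price_per_10k_reads):
    """Storage cost plus access cost for one month."""
    return gb_stored * price_per_gb + (reads / 10_000) * price_per_10k_reads

# Hot: pricier storage, cheaper access. Cool: cheaper storage, pricier access.
hot = dict(price_per_gb=0.02, price_per_10k_reads=0.004)
cool = dict(price_per_gb=0.01, price_per_10k_reads=0.01)

frequent = 100_000_000  # reads/month at which the hot tier wins
rare = 10_000           # reads/month at which the cool tier wins
print(monthly_cost(1000, frequent, **hot) < monthly_cost(1000, frequent, **cool))  # True
print(monthly_cost(1000, rare, **cool) < monthly_cost(1000, rare, **hot))          # True
```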

Accessing your blobs, files, and queues


Each storage account has two authentication keys, either of which can be used for any operation. There are two
keys so you can roll over the keys occasionally to enhance security. It is critical that these keys be kept secure
because their possession, along with the account name, allows unlimited access to all data in the storage account.
This section looks at two ways to secure the storage account and its data. For detailed information about securing
your storage account and your data, see the Azure Storage security guide.
Securing access to storage accounts using Azure AD
One way to secure access to your storage data is by controlling access to the storage account keys. With Resource
Manager Role-Based Access Control (RBAC), you can assign roles to users, groups, or applications. These roles are
tied to a specific set of actions that are allowed or disallowed. Using RBAC to grant access to a storage account
only handles the management operations for that storage account, such as changing the access tier. You can't use
RBAC to grant access to data objects like a specific container or file share. You can, however, use RBAC to grant
access to the storage account keys, which can then be used to read the data objects.
Securing access using shared access signatures
You can use shared access signatures and stored access policies to secure your data objects. A shared access
signature (SAS) is a string containing a security token that can be attached to the URI for an asset. It allows you
to delegate access to specific storage objects and to specify constraints such as permissions and the date/time
range of access. This feature has extensive capabilities. For detailed information, refer to Using Shared Access
Signatures (SAS).
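To make the idea concrete, here is a simplified sketch of how a signed token can encode constraints without the service storing any per-token state. This is not the actual Azure SAS format or string-to-sign; it only illustrates the HMAC-based mechanism:

```python
import base64
import hashlib
import hmac

# Simplified illustration of a shared access signature: the constraints
# (resource, permissions, expiry) are signed with the account key, so the
# service can later verify them without storing any per-token state.
# This is NOT the real Azure SAS string-to-sign format.

def make_token(account_key: bytes, resource: str, permissions: str, expiry: str) -> str:
    string_to_sign = "\n".join([resource, permissions, expiry])
    sig = hmac.new(account_key, string_to_sign.encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(sig).decode()

def verify(account_key: bytes, resource: str, permissions: str, expiry: str, token: str) -> bool:
    expected = make_token(account_key, resource, permissions, expiry)
    return hmac.compare_digest(expected, token)

key = b"hypothetical-account-key"
token = make_token(key, "/mycontainer/myblob", "r", "2018-12-31T00:00:00Z")
# Tampering with the permissions invalidates the signature.
print(verify(key, "/mycontainer/myblob", "rw", "2018-12-31T00:00:00Z", token))  # False
```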
Public access to blobs
The Blob Service allows you to provide public access to a container and its blobs, or a specific blob. When you
indicate that a container or blob is public, anyone can read it anonymously; no authentication is required. An
example of when you would want to do this is when you have a website that is using images, video, or documents
from Blob storage. For more information, see Manage anonymous read access to containers and blobs.

Encryption
There are two basic kinds of encryption available for the Storage services. For more information about security and
encryption, see the Azure Storage security guide.
Encryption at rest
Azure Storage Service Encryption (SSE) at rest helps you protect and safeguard your data to meet your
organizational security and compliance commitments. With this feature, Azure Storage automatically encrypts your
data prior to persisting to storage and decrypts prior to retrieval. The encryption, decryption, and key management
are totally transparent to users.
SSE automatically encrypts data in all performance tiers (Standard and Premium), all deployment models (Azure
Resource Manager and Classic), and all of the Azure Storage services (Blob, Queue, Table, and File). SSE does not
affect Azure Storage performance.
For more information about SSE encryption at rest, see Azure Storage Service Encryption for Data at Rest.
Client-side encryption
The storage client libraries have methods you can call to programmatically encrypt data before sending it across
the wire from the client to Azure. It is stored encrypted, which means it also is encrypted at rest. When reading the
data back, you decrypt the information after receiving it.
For more information about client-side encryption, see Client-Side Encryption with .NET for Microsoft Azure
Storage.

Replication
In order to ensure that your data is durable, Azure Storage replicates multiple copies of your data. When you set up
your storage account, you select a replication type. In most cases, this setting can be modified after the storage
account has been created.
Replication options for a storage account include:
Locally redundant storage (LRS): The simplest, low-cost replication strategy that Azure Storage offers.
Zone-redundant storage (ZRS): A simple option for high availability and durability.
Geo-redundant storage (GRS): Cross-regional replication to protect against region-wide unavailability.
Read-access geo-redundant storage (RA-GRS): Cross-regional replication with read access to the replica.
For disaster recovery information, see What to do if an Azure Storage outage occurs.

Transferring data to and from Azure Storage


You can use the AzCopy command-line utility to copy blob and file data within your storage account or across
storage accounts. See one of the following articles for help:
Transfer data with AzCopy for Windows
Transfer data with AzCopy for Linux
AzCopy is built on top of the Azure Data Movement Library, which is currently available in preview.
The Azure Import/Export service can be used to import or export large amounts of blob data to or from your
storage account. You prepare and mail multiple hard drives to an Azure data center, where they will transfer the
data to/from the hard drives and send the hard drives back to you. For more information about the Import/Export
service, see Use the Microsoft Azure Import/Export Service to Transfer Data to Blob Storage.
To import large amounts of blob data to your storage account in a quick, inexpensive, and reliable way, you can
also use Azure Data Box Disk. Microsoft ships up to 5 encrypted solid-state disks (SSDs) with a 40 TB capacity to
your datacenter through a regional carrier. You quickly configure the disks, copy data to disks over a USB
connection, and ship the disks back to Azure. In the Azure datacenter, your data is automatically uploaded from
drives to the cloud. For more information about this solution, go to Azure Data Box Disk overview.

Pricing
For detailed information about pricing for Azure Storage, see the Pricing page.

Storage APIs, libraries, and tools


Azure Storage resources can be accessed by any language that can make HTTP/HTTPS requests. Additionally,
Azure Storage offers programming libraries for several popular languages. These libraries simplify many aspects of
working with Azure Storage by handling details such as synchronous and asynchronous invocation, batching of
operations, exception management, automatic retries, operational behavior, and so forth. Libraries are currently
available for the following languages and platforms, with others in the pipeline:
Azure Storage data API and library references
Storage Services REST API
Storage Client Library for .NET
Storage Client Library for Java/Android
Storage Client Library for Node.js
Storage Client Library for Python
Storage Client Library for PHP
Storage Client Library for Ruby
Storage Client Library for C++
Azure Storage management API and library references
Storage Resource Provider REST API
Storage Resource Provider Client Library for .NET
Storage Service Management REST API (Classic)
Azure Storage data movement API and library references
Storage Import/Export Service REST API
Storage Data Movement Client Library for .NET
Tools and utilities
Azure PowerShell Cmdlets for Storage
Azure CLI Cmdlets for Storage
AzCopy Command-Line Utility
Azure Storage Explorer is a free, standalone app from Microsoft that enables you to work visually with Azure
Storage data on Windows, macOS, and Linux.
Azure Storage Client Tools
Azure Developer Tools
Next steps
To get up and running with Azure Storage, see Create a storage account.
Deciding when to use Azure Blobs, Azure Files, or Azure
Disks
8/6/2018 • 3 minutes to read

Microsoft Azure provides several features in Azure Storage for storing and accessing your data in the cloud. This article covers
Azure Files, Blobs, and Disks, and is designed to help you choose between these features.

Scenarios
The following table compares Files, Blobs, and Disks, and shows example scenarios appropriate for each.

Azure Files
Description: Provides an SMB interface, client libraries, and a REST interface that allows access from anywhere to stored files.
When to use: You want to "lift and shift" an application to the cloud which already uses the native file system APIs to share data between it and other applications running in Azure. You want to store development and debugging tools that need to be accessed from many virtual machines.

Azure Blobs
Description: Provides client libraries and a REST interface that allows unstructured data to be stored and accessed at a massive scale in block blobs.
When to use: You want your application to support streaming and random access scenarios. You want to be able to access application data from anywhere.

Azure Disks
Description: Provides client libraries and a REST interface that allows data to be persistently stored and accessed from an attached virtual hard disk.
When to use: You want to lift and shift applications that use native file system APIs to read and write data to persistent disks. You want to store data that is not required to be accessed from outside the virtual machine to which the disk is attached.

Comparison: Files and Blobs


The following table compares Azure Files with Azure Blobs.

Attribute | Azure Blobs | Azure Files
Durability options | LRS, ZRS, GRS, RA-GRS | LRS, ZRS, GRS
Accessibility | REST APIs | REST APIs; SMB 2.1 and SMB 3.0 (standard file system APIs)
Connectivity | REST APIs -- worldwide | REST APIs -- worldwide; SMB 2.1 -- within region; SMB 3.0 -- worldwide
Endpoints | http://myaccount.blob.core.windows.net/mycontainer/myblob | \\myaccount.file.core.windows.net\myshare\myfile.txt; http://myaccount.file.core.windows.net/myshare/myfile.txt
Directories | Flat namespace | True directory objects
Case sensitivity of names | Case sensitive | Case insensitive, but case preserving
Capacity | Up to 500 TiB containers | 5 TiB file shares
Throughput | Up to 60 MiB/s per block blob | Up to 60 MiB/s per share
Object size | Up to about 4.75 TiB per block blob | Up to 1 TiB per file
Billed capacity | Based on bytes written | Based on file size
Client libraries | Multiple languages | Multiple languages

Comparison: Files and Disks


Azure Files complements Azure Disks. A disk can only be attached to one Azure Virtual Machine at a time. Disks are fixed-format
VHDs stored as page blobs in Azure Storage, and are used by the virtual machine to store durable data. File shares in Azure Files
can be accessed in the same way as the local disk is accessed (by using native file system APIs), and can be shared across many
virtual machines.
The following table compares Azure Files with Azure Disks.

Attribute | Azure Disks | Azure Files
Scope | Exclusive to a single virtual machine | Shared access across multiple virtual machines
Snapshots and copy | Yes | Yes
Configuration | Connected at startup of the virtual machine | Connected after the virtual machine has started
Authentication | Built-in | Set up with net use
Cleanup | Automatic | Manual
Access using REST | Files within the VHD cannot be accessed | Files stored in a share can be accessed
Max size | 4 TiB per disk | 5 TiB file share; 1 TiB file within the share
Max 8 KB IOPS | 500 IOPS | 1,000 IOPS
Throughput | Up to 60 MiB/s per disk | Up to 60 MiB/s per file share

Next steps
When making decisions about how your data is stored and accessed, you should also consider the costs involved. For more
information, see Azure Storage Pricing.
Some SMB features are not applicable to the cloud. For more information, see Features not supported by the Azure File service.
For more information about disks, see Managing disks and images and How to Attach a Data Disk to a Windows Virtual Machine.
Introduction to object storage in Azure
8/1/2018 • 2 minutes to read

Azure Blob storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing
massive amounts of unstructured data, such as text or binary data.
Blob storage is ideal for:
Serving images or documents directly to a browser.
Storing files for distributed access.
Streaming video and audio.
Writing to log files.
Storing data for backup and restore, disaster recovery, and archiving.
Storing data for analysis by an on-premises or Azure-hosted service.
Objects in Blob storage can be accessed from anywhere in the world via HTTP or HTTPS. Users or client
applications can access blobs via URLs, the Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure
Storage client library. The storage client libraries are available for multiple languages, including .NET, Java,
Node.js, Python, PHP, and Ruby.

Blob service concepts


Blob storage exposes three resources: your storage account, the containers in the account, and the blobs in a
container. The following diagram shows the relationship between these resources.

Storage Account
All access to data objects in Azure Storage happens through a storage account. For more information, see About
Azure storage accounts.
Container
A container organizes a set of blobs, similar to a folder in a file system. All blobs reside within a container. A
storage account can contain an unlimited number of containers, and a container can store an unlimited number of
blobs. Note that the container name must be lowercase.
Blob
Azure Storage offers three types of blobs -- block blobs, append blobs, and page blobs (used for VHD files).
Block blobs store text and binary data, up to about 4.7 TB. Block blobs are made up of blocks of data that can
be managed individually.
Append blobs are made up of blocks like block blobs, but are optimized for append operations. Append blobs
are ideal for scenarios such as logging data from virtual machines.
Page blobs store random access files up to 8 TB in size. Page blobs store the VHD files that back VMs.
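The phrase "blocks of data that can be managed individually" can be illustrated with a small stand-in for the block blob model: blocks are staged independently, and a commit of an ordered block-ID list assembles the blob. This mirrors the two-step upload pattern in spirit only; it is not the real service API:

```python
# Illustrative stand-in for the block blob model: blocks are uploaded and
# tracked individually, and a final commit of an ordered block-ID list
# assembles them into the blob's content. Not the real service API.

class BlockBlobSketch:
    def __init__(self):
        self.staged = {}    # block_id -> data, uploaded but uncommitted
        self.content = b""  # the committed blob

    def stage_block(self, block_id: str, data: bytes) -> None:
        self.staged[block_id] = data  # each block is managed independently

    def commit_block_list(self, block_ids) -> bytes:
        # The blob becomes the listed blocks, concatenated in order.
        self.content = b"".join(self.staged[b] for b in block_ids)
        return self.content

blob = BlockBlobSketch()
blob.stage_block("block-000", b"hello, ")
blob.stage_block("block-001", b"world")
print(blob.commit_block_list(["block-000", "block-001"]))  # b'hello, world'
```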
All blobs reside within a container. A container is similar to a folder in a file system. You can further organize blobs
into virtual directories, and traverse them as you would a file system.
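Because the namespace is flat, a "virtual directory" is just a shared name prefix. The following sketch shows how listing with a prefix and a "/" delimiter recovers one level of the hierarchy, in the spirit of the list-blobs operation (the helper and blob names are illustrative):

```python
# Sketch: a container's namespace is flat, so "directories" are just name
# prefixes. Listing with a prefix and a "/" delimiter recovers one level of
# the virtual hierarchy, similar to what the list-blobs API does.

def list_level(blob_names, prefix="", delimiter="/"):
    blobs, dirs = [], set()
    for name in blob_names:
        if not name.startswith(prefix):
            continue
        rest = name[len(prefix):]
        if delimiter in rest:
            # Everything up to the next delimiter appears as one "directory".
            dirs.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            blobs.append(name)
    return sorted(blobs), sorted(dirs)

names = ["logs/2018/08/06.log", "logs/2018/08/07.log", "readme.txt"]
print(list_level(names))                 # (['readme.txt'], ['logs/'])
print(list_level(names, "logs/2018/08/"))
```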
For very large datasets where network constraints make uploading or downloading data to Blob storage over the
wire unrealistic, you can ship a set of hard drives to Microsoft to import or export data directly from the data
center. For more information, see Use the Microsoft Azure Import/Export Service to Transfer Data to Blob
Storage.
For details about naming containers and blobs, see Naming and Referencing Containers, Blobs, and Metadata.

Next steps
Create a storage account
Getting started with Blob storage using .NET
Azure Storage samples using .NET
Azure Storage samples using Java
Introduction to Azure Data Lake Storage Gen2
Preview
8/6/2018 • 4 minutes to read

Azure Data Lake Storage Gen2 Preview is a set of capabilities dedicated to big data analytics, built on top of Azure
Blob storage. It allows you to interface with your data using both file system and object storage paradigms. This
makes Data Lake Storage Gen2 the only cloud-based multi-modal storage service, allowing you to extract analytics
value from all of your data.
Data Lake Storage Gen2 features all qualities that are required for the full lifecycle of analytics data. This results
from converging the capabilities of our two existing storage services. Features from Azure Data Lake Storage
Gen1, such as file system semantics, file-level security and scale are combined with low-cost, tiered storage, high
availability/disaster recovery capabilities and a large SDK/tooling ecosystem from Azure Blob storage. In Data
Lake Storage Gen2, all the qualities of object storage remain while adding the advantages of a file system interface
optimized for analytics workloads.

Designed for enterprise big data analytics


Data Lake Storage Gen2 is the foundational storage service for building enterprise data lakes (EDLs) on Azure.
Designed from the start to service multiple petabytes of information while sustaining hundreds of gigabits of
throughput, Data Lake Storage Gen2 gives you an easy way to manage massive amounts of data.
A fundamental feature of Data Lake Storage Gen2 is the addition of a hierarchical namespace to the Blob storage
service which organizes objects/files into a hierarchy of directories for performant data access. The hierarchical
namespace also enables Data Lake Storage Gen2 to support both object store and file system paradigms at the
same time. For instance, a common object store naming convention uses slashes in the name to mimic a
hierarchical folder structure. This structure becomes real with Data Lake Storage Gen2. Operations such as
renaming or deleting a directory become single atomic metadata operations on the directory rather than
enumerating and processing all objects that share the name prefix of the directory.
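The difference can be sketched as follows: in a flat namespace, renaming a "directory" means rewriting every object whose name carries the prefix, while a hierarchical namespace makes it one metadata operation. The code below models only the flat case to show the cost:

```python
# Sketch: in a flat object store, renaming a "directory" means touching every
# object whose name starts with the prefix; with a hierarchical namespace the
# same rename is a single atomic metadata operation on the directory.

def rename_dir_flat(blob_names, old_prefix, new_prefix):
    """O(number of objects): every matching name is rewritten individually."""
    ops = 0
    renamed = []
    for name in blob_names:
        if name.startswith(old_prefix):
            renamed.append(new_prefix + name[len(old_prefix):])
            ops += 1
        else:
            renamed.append(name)
    return renamed, ops

names = [f"staging/part-{i}.csv" for i in range(1000)]
renamed, ops = rename_dir_flat(names, "staging/", "published/")
print(ops)  # 1000 per-object operations, versus 1 metadata operation with a hierarchical namespace
```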
In the past, cloud-based analytics had to compromise in areas of performance, management, and security. Data
Lake Storage Gen2 addresses each of these aspects in the following ways:
Performance is optimized because you do not need to copy or transform data as a prerequisite for analysis.
The hierarchical namespace greatly improves the performance of directory management operations which
improves overall job performance.
Management is easier because you can organize and manipulate files through directories and
subdirectories.
Cost effectiveness is made possible as Data Lake Storage Gen2 is built on top of the low-cost Azure Blob
storage. The additional features further lower the total cost of ownership for running big data analytics on
Azure.

Key features of Data Lake Storage Gen2


NOTE
During the public preview of Data Lake Storage Gen2, some of the features listed below may vary in their availability. As new
features and regions are released during the preview program, this information will be communicated. Sign up for the
public preview of Data Lake Storage Gen2.

Hadoop compatible access: Data Lake Storage Gen2 allows you to manage and access data just as you
would with a Hadoop Distributed File System (HDFS). The new ABFS driver is available within all Apache
Hadoop environments, including Azure HDInsight and Azure Databricks, to access data stored in Data Lake
Storage Gen2.
Multi-protocol and multi-modal data access: Data Lake Storage Gen2 is considered a multi-modal
storage service as it provides both object store and file system interfaces to the same data at the same
time. This is achieved by providing multiple protocol endpoints that are able to access the same data.
Unlike other analytics solutions, data stored in Data Lake Storage Gen2 does not need to move or be
transformed before you can run a variety of analytics tools. You can access data via traditional Blob storage
APIs (for example: ingest data via Event Hubs Capture) and process that data using HDInsight or Azure
Databricks at the same time.
Cost effective: Data Lake Storage Gen2 features low-cost storage capacity and transactions. As data
transitions through its complete lifecycle, billing rates change, keeping costs to a minimum via built-in
features such as Azure Blob storage lifecycle management.
Works with Blob storage tools, frameworks, and apps: Data Lake Storage Gen2 continues to work with
a wide array of tools, frameworks, and applications that exist today for Blob storage.
Optimized driver: The abfs driver is optimized specifically for big data analytics. The corresponding REST
APIs are surfaced through the dfs endpoint, dfs.core.windows.net.

Scalability
Azure Storage is scalable by design whether you access via Data Lake Storage Gen2 or Blob storage interfaces. It is
able to store and serve many exabytes of data. This amount of storage is available with throughput measured in
gigabits per second (Gbps) at high levels of input/output operations per second (IOPS). Beyond just persistence,
processing is executed at near-constant per-request latencies that are measured at the service, account, and file
levels.

Cost effectiveness
One of the many benefits of building Data Lake Storage Gen2 on top of Azure Blob storage is the low cost of
storage capacity and transactions. Unlike other cloud storage services, Data Lake Storage Gen2 lowers costs
because data is not required to be moved or transformed prior to performing analysis.
Additionally, features such as the hierarchical namespace significantly improve the overall performance of many
analytics jobs. This improvement in performance means that you require less compute power to process the same
amount of data, resulting in a lower total cost of ownership (TCO) for the end-to-end analytics job.

Next steps
The following articles describe some of the main concepts of Data Lake Storage Gen2 and detail how to store,
access, manage, and gain insights from your data:
Hierarchical namespace
Create a storage account
Create an HDInsight cluster with Azure Data Lake Storage Gen2
Use an Azure Data Lake Storage Gen2 account in Azure Databricks
Introduction to Azure Files
8/6/2018 • 3 minutes to read

Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server
Message Block (SMB) protocol. Azure file shares can be mounted concurrently by cloud or on-premises
deployments of Windows, Linux, and macOS. Additionally, Azure file shares can be cached on Windows Servers
with Azure File Sync for fast access near where the data is being used.

Videos
Introducing Azure File Sync (2 m)
Azure Files with Sync (Ignite 2017) (85 m)

Why Azure Files is useful


Azure file shares can be used to:
Replace or supplement on-premises file servers:
Azure Files can be used to completely replace or supplement traditional on-premises file servers or NAS
devices. Popular operating systems such as Windows, macOS, and Linux can directly mount Azure file
shares wherever they are in the world. Azure file shares can also be replicated with Azure File Sync to
Windows Servers, either on-premises or in the cloud, for performance and distributed caching of the data
where it's being used.
"Lift and shift" applications:
Azure Files makes it easy to "lift and shift" applications to the cloud that expect a file share to store file
application or user data. Azure Files enables both the "classic" lift and shift scenario, where both the
application and its data are moved to Azure, and the "hybrid" lift and shift scenario, where the application
data is moved to Azure Files, and the application continues to run on-premises.
Simplify cloud development:
Azure Files can also be used in numerous ways to simplify new cloud development projects. For example:
Shared application settings:
A common pattern for distributed applications is to have configuration files in a centralized location
where they can be accessed from many application instances. Application instances can load their
configuration through the File REST API, and humans can access them as needed by mounting the
SMB share locally.
Diagnostic share:
An Azure file share is a convenient place for cloud applications to write their logs, metrics, and crash
dumps. Logs can be written by the application instances via the File REST API, and developers can
access them by mounting the file share on their local machine. This enables great flexibility, as
developers can embrace cloud development without having to abandon any existing tooling they
know and love.
Dev/Test/Debug:
When developers or administrators are working on VMs in the cloud, they often need a set of tools
or utilities. Copying such utilities and tools to each VM can be a time consuming exercise. By
mounting an Azure file share locally on the VMs, a developer and administrator can quickly access
their tools and utilities, no copying required.

Key benefits
Shared access. Azure file shares support the industry standard SMB protocol, meaning you can seamlessly
replace your on-premises file shares with Azure file shares without worrying about application compatibility.
Being able to share a file system across multiple machines, applications/instances is a significant advantage
with Azure Files for applications that need shareability.
Fully managed. Azure file shares can be created without the need to manage hardware or an OS. This means
you don't have to deal with patching the server OS with critical security upgrades or replacing faulty hard
disks.
Scripting and tooling. PowerShell cmdlets and the Azure CLI can be used to create, mount, and manage Azure
file shares as part of the administration of Azure applications. You can also create and manage Azure file shares
using the Azure portal and Azure Storage Explorer.
Resiliency. Azure Files has been built from the ground up to be always available. Replacing on-premises file
shares with Azure Files means you no longer have to wake up to deal with local power outages or network
issues.
Familiar programmability. Applications running in Azure can access data in the share via file system I/O
APIs. Developers can therefore leverage their existing code and skills to migrate existing applications. In
addition to System IO APIs, you can use Azure Storage Client Libraries or the Azure Storage REST API.

Next Steps
Create an Azure file share
Connect and mount on Windows
Connect and mount on Linux
Connect and mount on macOS
Introduction to Queues
8/6/2018 • 2 minutes to read

Azure Queue storage is a service for storing large numbers of messages that can be accessed from anywhere in
the world via authenticated calls using HTTP or HTTPS. A single queue message can be up to 64 KB in size, and a
queue can contain millions of messages, up to the total capacity limit of a storage account.
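As a sketch of this limit, a client can validate a message locally before enqueueing it. This is an illustrative helper, not part of the Queue storage API; the function name is an assumption.

```python
# Pre-flight check before enqueueing: Queue storage rejects messages
# larger than 64 KB, so validate the encoded payload size locally first.
MAX_MESSAGE_BYTES = 64 * 1024  # 64 KB service limit per queue message

def fits_in_queue(message: str) -> bool:
    """Return True if the UTF-8 encoded message is within the 64 KB limit."""
    return len(message.encode("utf-8")) <= MAX_MESSAGE_BYTES

print(fits_in_queue("resize-image-0042.jpg"))  # True: a tiny work item
print(fits_in_queue("x" * (65 * 1024)))        # False: 65 KB exceeds the limit
```

Note that clients which Base64-encode message bodies reduce the effective payload limit, so a stricter threshold may be appropriate in practice.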

Common uses
Common uses of Queue storage include:
Creating a backlog of work to process asynchronously
Passing messages from an Azure web role to an Azure worker role

Queue service concepts


The Queue service contains the following components:

URL format: Queues are addressable using the following URL format:
https://<storage account>.queue.core.windows.net/<queue>
For example, the following URL addresses a queue named images-to-download:
https://myaccount.queue.core.windows.net/images-to-download

Storage account: All access to Azure Storage is done through a storage account. See Azure Storage
Scalability and Performance Targets for details about storage account capacity.
Queue: A queue contains a set of messages. All messages must be in a queue. Note that the queue name
must be all lowercase. For information on naming queues, see Naming Queues and Metadata.
Message: A message, in any format, of up to 64 KB. The maximum time that a message can remain in the
queue is seven days.
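The URL format and lowercase-naming rule above can be sketched in code. The validation pattern here is an assumption based on the published naming rules, not an exhaustive implementation:

```python
import re

def queue_url(account: str, queue: str) -> str:
    """Build a queue endpoint URL. Queue names must be all lowercase:
    3-63 characters of letters, digits, and single hyphens."""
    if not (3 <= len(queue) <= 63) or not re.fullmatch(r"[a-z0-9](-?[a-z0-9])*", queue):
        raise ValueError(f"invalid queue name: {queue!r}")
    return f"https://{account}.queue.core.windows.net/{queue}"

print(queue_url("myaccount", "images-to-download"))
# https://myaccount.queue.core.windows.net/images-to-download
```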

Next steps
Create a storage account
Getting started with Queues using .NET
Introduction to Table storage in Azure
8/6/2018 • 3 minutes to read

TIP
The content in this article applies to the original Azure Table storage. However, there is now a premium offering for table
storage, the Azure Cosmos DB Table API that offers throughput-optimized tables, global distribution, and automatic
secondary indexes. To learn more and try out the premium experience, please check out Azure Cosmos DB Table API.

Azure Table storage is a service that stores structured NoSQL data in the cloud, providing a key/attribute store
with a schemaless design. Because Table storage is schemaless, it's easy to adapt your data as the needs of your
application evolve. Access to Table storage data is fast and cost-effective for many types of applications, and is
typically lower in cost than traditional SQL for similar volumes of data.
You can use Table storage to store flexible datasets like user data for web applications, address books, device
information, or other types of metadata your service requires. You can store any number of entities in a table, and
a storage account may contain any number of tables, up to the capacity limit of the storage account.

What is Table storage


Azure Table storage stores large amounts of structured data. The service is a NoSQL datastore which accepts
authenticated calls from inside and outside the Azure cloud. Azure tables are ideal for storing structured, non-
relational data. Common uses of Table storage include:
Storing TBs of structured data capable of serving web scale applications
Storing datasets that don't require complex joins, foreign keys, or stored procedures and can be denormalized
for fast access
Quickly querying data using a clustered index
Accessing data using the OData protocol and LINQ queries with WCF Data Service .NET Libraries
You can use Table storage to store and query huge sets of structured, non-relational data, and your tables will
scale as demand increases.

Table storage concepts


Table storage contains the following components:

URL format: Azure Table Storage accounts use this format:


http://<storage account>.table.core.windows.net/<table>
Azure Cosmos DB Table API accounts use this format:
http://<storage account>.table.cosmosdb.azure.com/<table>

You can address Azure tables directly using this address with the OData protocol. For more information,
see OData.org.
Accounts: All access to Azure Storage is done through a storage account. See Azure Storage Scalability
and Performance Targets for details about storage account capacity.
All access to Azure Cosmos DB is done through a Table API account. See Create a Table API account for
details about creating a Table API account.
Table: A table is a collection of entities. Tables don't enforce a schema on entities, which means a single table
can contain entities that have different sets of properties.
Entity: An entity is a set of properties, similar to a database row. An entity in Azure Storage can be up to 1 MB
in size. An entity in Azure Cosmos DB can be up to 2 MB in size.
Properties: A property is a name-value pair. Each entity can include up to 252 properties to store data. Each
entity also has three system properties that specify a partition key, a row key, and a timestamp. Entities with the
same partition key can be queried more quickly, and inserted/updated in atomic operations. An entity's row
key is its unique identifier within a partition.
For details about naming tables and properties, see Understanding the Table Service Data Model.
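As a minimal sketch, an entity can be modeled as a dictionary of properties. The keys and values below are illustrative, and the 252-property check mirrors the limit described above:

```python
# Illustrative Table storage entity: PartitionKey + RowKey together
# uniquely identify it; Timestamp is maintained by the service.
entity = {
    "PartitionKey": "Smith",          # entities sharing a partition key can be
    "RowKey": "john@contoso.com",     # queried quickly and updated atomically
    "PhoneNumber": "425-555-0101",    # user-defined name-value pairs
    "City": "Redmond",
}

SYSTEM_PROPERTIES = {"PartitionKey", "RowKey", "Timestamp"}
user_properties = {k: v for k, v in entity.items() if k not in SYSTEM_PROPERTIES}

# Each entity may carry up to 252 user-defined properties.
assert len(user_properties) <= 252
print(sorted(user_properties))  # ['City', 'PhoneNumber']
```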

Next steps
Microsoft Azure Storage Explorer is a free, standalone app from Microsoft that enables you to work visually
with Azure Storage data on Windows, macOS, and Linux.
Get started with Azure Table Storage in .NET
View the Table service reference documentation for complete details about available APIs:
Storage Client Library for .NET reference
REST API reference
About disks storage for Azure Windows VMs
6/11/2018 • 9 minutes to read

Just like any other computer, virtual machines in Azure use disks as a place to store an operating system,
applications, and data. All Azure virtual machines have at least two disks – a Windows operating system disk and a
temporary disk. The operating system disk is created from an image, and both the operating system disk and the
image are virtual hard disks (VHDs) stored in an Azure storage account. Virtual machines also can have one or
more data disks, which are also stored as VHDs.
In this article, we will talk about the different uses for the disks, and then discuss the different types of disks you
can create and use. This article is also available for Linux virtual machines.

NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager model.

Disks used by VMs


Let's take a look at how the disks are used by the VMs.
Operating system disk
Every virtual machine has one attached operating system disk. It's registered as a SATA drive and labeled as the C:
drive by default. This disk has a maximum capacity of 2048 gigabytes (GB).
Temporary disk
Each VM contains a temporary disk. The temporary disk provides short-term storage for applications and
processes and is intended to only store data such as page or swap files. Data on the temporary disk may be lost
during a maintenance event or when you redeploy a VM. During a successful standard reboot of the VM, the data
on the temporary drive will persist.
The temporary disk is labeled as the D: drive by default and is used for storing pagefile.sys. To remap this disk to a
different drive letter, see Change the drive letter of the Windows temporary disk. The size of the temporary disk
varies, based on the size of the virtual machine. For more information, see Sizes for Windows virtual machines.
For more information on how Azure uses the temporary disk, see Understanding the temporary drive on
Microsoft Azure Virtual Machines
Data disk
A data disk is a VHD that's attached to a virtual machine to store application data, or other data you need to keep.
Data disks are registered as SCSI drives and are labeled with a letter that you choose. Each data disk has a
maximum capacity of 4095 GB. The size of the virtual machine determines how many data disks you can attach to
it and the type of storage you can use to host the disks.

NOTE
For more information about virtual machines capacities, see Sizes for Windows virtual machines.

Azure creates an operating system disk when you create a virtual machine from an image. If you use an image that
includes data disks, Azure also creates the data disks when it creates the virtual machine. Otherwise, you add data
disks after you create the virtual machine.
You can add data disks to a virtual machine at any time, by attaching the disk to the virtual machine. You can use
a VHD that you've uploaded or copied to your storage account, or use an empty VHD that Azure creates for you.
Attaching a data disk associates the VHD file with the VM by placing a 'lease' on the VHD so it can't be deleted
from storage while it's still attached.

About VHDs
The VHDs used in Azure are .vhd files stored as page blobs in a standard or premium storage account in Azure.
For details about page blobs, see Understanding block blobs and page blobs. For details about premium storage,
see High-performance premium storage and Azure VMs.
Azure supports the fixed disk VHD format. The fixed format lays the logical disk out linearly within the file, so that
disk offset X is stored at blob offset X. A small footer at the end of the blob describes the properties of the VHD.
Often, the fixed-format wastes space because most disks have large unused ranges in them. However, Azure stores
.vhd files in a sparse format, so you receive the benefits of both the fixed and dynamic disks at the same time. For
more information, see Getting started with virtual hard disks.
All VHD files in Azure that you want to use as a source to create disks or images are read-only, except the .vhd files
uploaded or copied to Azure storage by the user (which can be either read-write or read-only). When you create a
disk or image, Azure makes copies of the source .vhd files. These copies can be read-only or read-and-write,
depending on how you use the VHD.
When you create a virtual machine from an image, Azure creates a disk for the virtual machine that is a copy of the
source .vhd file. To protect against accidental deletion, Azure places a lease on any source .vhd file that’s used to
create an image, an operating system disk, or a data disk.
Before you can delete a source .vhd file, you’ll need to remove the lease by deleting the disk or image. To delete a
.vhd file that is being used by a virtual machine as an operating system disk, you can delete the virtual machine,
the operating system disk, and the source .vhd file all at once by deleting the virtual machine and deleting all
associated disks. However, deleting a .vhd file that’s a source for a data disk requires several steps in a set order.
First you detach the disk from the virtual machine, then delete the disk, and then delete the .vhd file.

WARNING
If you delete a source .vhd file from storage, or delete your storage account, Microsoft can't recover that data for you.

Types of disks
Azure Disks are designed for 99.999% availability, and have consistently delivered enterprise-grade
durability with an industry-leading 0% annualized failure rate.
There are three performance tiers of storage to choose from when creating your disks -- Premium SSD
disks, Standard SSD disks (Preview), and Standard HDD storage. There are also two types of disks -- unmanaged
and managed.
Standard HDD disks
Standard HDD disks are backed by HDDs, and deliver cost-effective storage. Standard HDD storage can be
replicated locally in one datacenter, or be geo-redundant with primary and secondary data centers. For more
information about storage replication, see Azure Storage replication.
For more information about using Standard HDD disks, see Standard Storage and Disks.
Standard SSD disks (preview)
Standard SSD disks are designed to address the same kind of workloads as Standard HDD disks, but offer more
consistent performance and reliability than HDD. Standard SSD disks combine elements of Premium SSD disks
and Standard HDD disks to form a cost-effective solution best suited for applications like web servers that do not
need high IOPS on disks. Where available, Standard SSD disks are the recommended deployment option for most
workloads. Standard SSD disks are only available as Managed Disks, and while in preview are only available in
select regions and with the locally redundant storage (LRS) resiliency type.
Premium SSD disks
Premium SSD disks are backed by SSDs, and deliver high-performance, low-latency disk support for VMs
running I/O-intensive workloads. Typically, you can use Premium SSD disks with VM sizes that include an "s" in the
series name. For example, of the Dv3-series and the Dsv3-series, the Dsv3-series can be used with Premium
SSD disks. For more information, see Premium Storage.
Unmanaged disks
Unmanaged disks are the traditional type of disks that have been used by VMs. With these disks, you create your
own storage account and specify that storage account when you create the disk. Make sure you don't put too many
disks in the same storage account, because you could exceed the scalability targets of the storage account (20,000
IOPS, for example), resulting in the VMs being throttled. With unmanaged disks, you have to figure out how to
maximize the use of one or more storage accounts to get the best performance out of your VMs.
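A back-of-the-envelope calculation illustrates why this matters. The 20,000 IOPS account target and 500 IOPS per standard disk are the figures cited in this article; actual targets vary by account type and region:

```python
# With unmanaged disks, the storage account's scalability target caps
# how many fully loaded disks it can host before VMs get throttled.
ACCOUNT_IOPS_TARGET = 20_000   # approximate standard storage account target
STANDARD_DISK_IOPS = 500       # max IOPS of one standard disk

max_disks = ACCOUNT_IOPS_TARGET // STANDARD_DISK_IOPS
print(max_disks)  # 40 fully utilized standard disks saturate one account
```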
Managed disks
Managed Disks handles the storage account creation/management in the background for you, and ensures that
you do not have to worry about the scalability limits of the storage account. You simply specify the disk size and
the performance tier (Standard/Premium), and Azure creates and manages the disk for you. As you add disks or
scale the VM up and down, you don't have to worry about the storage being used.
You can also manage your custom images in one storage account per Azure region, and use them to create
hundreds of VMs in the same subscription. For more information about Managed Disks, see the Managed Disks
Overview.
We recommend that you use Azure Managed Disks for new VMs, and that you convert your previous unmanaged
disks to managed disks, to take advantage of the many features available in Managed Disks.
Disk comparison
The following table provides a comparison of Standard HDD, Standard SSD, and Premium SSD for unmanaged
and managed disks to help you decide what to use.

| | Azure Premium Disk | Azure Standard SSD Disk (Preview) | Azure Standard HDD Disk |
| --- | --- | --- | --- |
| Disk Type | Solid State Drives (SSD) | Solid State Drives (SSD) | Hard Disk Drives (HDD) |
| Overview | SSD-based high-performance, low-latency disk support for VMs running IO-intensive workloads or hosting mission-critical production environments | More consistent performance and reliability than HDD. Optimized for low-IOPS workloads | HDD-based cost-effective disk for infrequent access |
| Scenario | Production and performance-sensitive workloads | Web servers, lightly used enterprise applications, and Dev/Test | Backup, non-critical, infrequent access |
| Disk Size | P4: 32 GiB (Managed Disks only), P6: 64 GiB (Managed Disks only), P10: 128 GiB, P15: 256 GiB (Managed Disks only), P20: 512 GiB, P30: 1024 GiB, P40: 2048 GiB, P50: 4095 GiB | Managed Disks only: E10: 128 GiB, E15: 256 GiB, E20: 512 GiB, E30: 1024 GiB, E40: 2048 GiB, E50: 4095 GiB | Unmanaged Disks: 1 GiB – 4 TiB (4095 GiB); Managed Disks: S4: 32 GiB, S6: 64 GiB, S10: 128 GiB, S15: 256 GiB, S20: 512 GiB, S30: 1024 GiB, S40: 2048 GiB, S50: 4095 GiB |
| Max Throughput per Disk | 250 MiB/s | Up to 60 MiB/s | Up to 60 MiB/s |
| Max IOPS per Disk | 7500 IOPS | Up to 500 IOPS | Up to 500 IOPS |

One last recommendation: Use TRIM with unmanaged standard disks


If you use unmanaged standard disks (HDD ), you should enable TRIM. TRIM discards unused blocks on the disk
so you are only billed for storage that you are actually using. This can save on costs if you create large files and
then delete them.
You can run this command to check the TRIM setting. Open a command prompt on your Windows VM and type:

fsutil behavior query DisableDeleteNotify

If the command returns 0, TRIM is enabled correctly. If it returns 1, run the following command to enable TRIM:

fsutil behavior set DisableDeleteNotify 0

NOTE
TRIM support starts with Windows Server 2012 / Windows 8 and above. See New API allows apps to send "TRIM and
Unmap" hints to storage media.

Next steps
Attach a disk to add additional storage for your VM.
Create a snapshot.
Convert to managed disks.
About disks storage for Azure Linux VMs
7/13/2018 • 9 minutes to read

Just like any other computer, virtual machines in Azure use disks as a place to store an operating system,
applications, and data. All Azure virtual machines have at least two disks – a Linux operating system disk and a
temporary disk. The operating system disk is created from an image, and both the operating system disk and the
image are virtual hard disks (VHDs) stored in an Azure storage account. Virtual machines also can have one or
more data disks, which are also stored as VHDs.
In this article, we will talk about the different uses for the disks, and then discuss the different types of disks you
can create and use. This article is also available for Windows virtual machines.

NOTE
Azure has two different deployment models for creating and working with resources: Resource Manager and classic. This
article covers using both models, but Microsoft recommends that most new deployments use the Resource Manager model.

Disks used by VMs


Let's take a look at how the disks are used by the VMs.

Operating system disk


Every virtual machine has one attached operating system disk. It's registered as a SATA drive and is labeled
/dev/sda by default. This disk has a maximum capacity of 2048 gigabytes (GB).

Temporary disk
Each VM contains a temporary disk. The temporary disk provides short-term storage for applications and
processes and is intended to only store data such as page or swap files. Data on the temporary disk may be lost
during a maintenance event or when you redeploy a VM. During a standard reboot of the VM, the data on the
temporary drive should persist. However, there are cases where the data may not persist, such as moving to a new
host. Accordingly, any data on the temp drive should not be data that is critical to the system.
On Linux virtual machines, the disk is typically /dev/sdb and is formatted and mounted to /mnt by the Azure
Linux Agent. The size of the temporary disk varies, based on the size of the virtual machine. For more information,
see Sizes for Linux virtual machines.
For more information on how Azure uses the temporary disk, see Understanding the temporary drive on
Microsoft Azure Virtual Machines

Data disk
A data disk is a VHD that's attached to a virtual machine to store application data, or other data you need to keep.
Data disks are registered as SCSI drives and are labeled with a letter that you choose. Each data disk has a
maximum capacity of 4095 GB. The size of the virtual machine determines how many data disks you can attach to
it and the type of storage you can use to host the disks.
NOTE
For more information about virtual machines capacities, see Sizes for Linux virtual machines.

Azure creates an operating system disk when you create a virtual machine from an image. If you use an image that
includes data disks, Azure also creates the data disks when it creates the virtual machine. Otherwise, you add data
disks after you create the virtual machine.
You can add data disks to a virtual machine at any time, by attaching the disk to the virtual machine. You can use
a VHD that you've uploaded or copied to your storage account, or one that Azure creates for you. Attaching a data
disk associates the VHD file with the VM, by placing a 'lease' on the VHD so it can't be deleted from storage while
it's still attached.

About VHDs
The VHDs used in Azure are .vhd files stored as page blobs in a standard or premium storage account in Azure.
For details about page blobs, see Understanding block blobs and page blobs. For details about premium storage,
see High-performance premium storage and Azure VMs.
Azure supports the fixed disk VHD format. The fixed format lays the logical disk out linearly within the file, so that
disk offset X is stored at blob offset X. A small footer at the end of the blob describes the properties of the VHD.
Often, the fixed-format wastes space because most disks have large unused ranges in them. However, Azure stores
.vhd files in a sparse format, so you receive the benefits of both the fixed and dynamic disks at the same time. For
more information, see Getting started with virtual hard disks.
All VHD files in Azure that you want to use as a source to create disks or images are read-only, except the .vhd files
uploaded or copied to Azure storage by the user (which can be either read-write or read-only). When you create a
disk or image, Azure makes copies of the source .vhd files. These copies can be read-only or read-and-write,
depending on how you use the VHD.
When you create a virtual machine from an image, Azure creates a disk for the virtual machine that is a copy of the
source .vhd file. To protect against accidental deletion, Azure places a lease on any source .vhd file that’s used to
create an image, an operating system disk, or a data disk.
Before you can delete a source .vhd file, you’ll need to remove the lease by deleting the disk or image. To delete a
.vhd file that is being used by a virtual machine as an operating system disk, you can delete the virtual machine,
the operating system disk, and the source .vhd file all at once by deleting the virtual machine and deleting all
associated disks. However, deleting a .vhd file that’s a source for a data disk requires several steps in a set order.
First you detach the disk from the virtual machine, then delete the disk, and then delete the .vhd file.

WARNING
If you delete a source .vhd file from storage, or delete your storage account, Microsoft can't recover that data for you.

Types of disks
Azure Disks are designed for 99.999% availability, and have consistently delivered enterprise-grade
durability with an industry-leading 0% annualized failure rate.
There are three performance tiers of storage to choose from when creating your disks -- Premium SSD
disks, Standard SSD disks (Preview), and Standard HDD storage. There are also two types of disks -- unmanaged
and managed.
Standard HDD disks
Standard HDD disks are backed by HDDs, and deliver cost-effective storage. Standard HDD storage can be
replicated locally in one datacenter, or be geo-redundant with primary and secondary data centers. For more
information about storage replication, see Azure Storage replication.
For more information about using Standard HDD disks, see Standard Storage and Disks.
Standard SSD disks (preview)
Standard SSD disks are designed to address the same kind of workloads as Standard HDD disks, but offer more
consistent performance and reliability than HDD. Standard SSD disks combine elements of Premium SSD disks
and Standard HDD disks to form a cost-effective solution best suited for applications like web servers that do not
need high IOPS on disks. Where available, Standard SSD disks are the recommended deployment option for most
workloads. Standard SSD disks are only available as Managed Disks, and while in preview are only available in
select regions and with the locally redundant storage (LRS) resiliency type.
Premium SSD disks
Premium SSD disks are backed by SSDs, and deliver high-performance, low-latency disk support for VMs
running I/O-intensive workloads. Typically, you can use Premium SSD disks with VM sizes that include an "s" in the
series name. For example, of the Dv3-series and the Dsv3-series, the Dsv3-series can be used with Premium
SSD disks. For more information, see Premium Storage.
Unmanaged disks
Unmanaged disks are the traditional type of disks that have been used by VMs. With these disks, you create your
own storage account and specify that storage account when you create the disk. Make sure you don't put too many
disks in the same storage account, because you could exceed the scalability targets of the storage account (20,000
IOPS, for example), resulting in the VMs being throttled. With unmanaged disks, you have to figure out how to
maximize the use of one or more storage accounts to get the best performance out of your VMs.
Managed disks
Managed Disks handles the storage account creation/management in the background for you, and ensures that
you do not have to worry about the scalability limits of the storage account. You simply specify the disk size and
the performance tier (Standard/Premium), and Azure creates and manages the disk for you. As you add disks or
scale the VM up and down, you don't have to worry about the storage being used.
You can also manage your custom images in one storage account per Azure region, and use them to create
hundreds of VMs in the same subscription. For more information about Managed Disks, see the Managed Disks
Overview.
We recommend that you use Azure Managed Disks for new VMs, and that you convert your previous unmanaged
disks to managed disks, to take advantage of the many features available in Managed Disks.
Disk comparison
The following table provides a comparison of Standard HDD, Standard SSD, and Premium SSD for unmanaged
and managed disks to help you decide what to use.

| | Azure Premium Disk | Azure Standard SSD Disk (Preview) | Azure Standard HDD Disk |
| --- | --- | --- | --- |
| Disk Type | Solid State Drives (SSD) | Solid State Drives (SSD) | Hard Disk Drives (HDD) |
| Overview | SSD-based high-performance, low-latency disk support for VMs running IO-intensive workloads or hosting mission-critical production environments | More consistent performance and reliability than HDD. Optimized for low-IOPS workloads | HDD-based cost-effective disk for infrequent access |
| Scenario | Production and performance-sensitive workloads | Web servers, lightly used enterprise applications, and Dev/Test | Backup, non-critical, infrequent access |
| Disk Size | P4: 32 GiB (Managed Disks only), P6: 64 GiB (Managed Disks only), P10: 128 GiB, P15: 256 GiB (Managed Disks only), P20: 512 GiB, P30: 1024 GiB, P40: 2048 GiB, P50: 4095 GiB | Managed Disks only: E10: 128 GiB, E15: 256 GiB, E20: 512 GiB, E30: 1024 GiB, E40: 2048 GiB, E50: 4095 GiB | Unmanaged Disks: 1 GiB – 4 TiB (4095 GiB); Managed Disks: S4: 32 GiB, S6: 64 GiB, S10: 128 GiB, S15: 256 GiB, S20: 512 GiB, S30: 1024 GiB, S40: 2048 GiB, S50: 4095 GiB |
| Max Throughput per Disk | 250 MiB/s | Up to 60 MiB/s | Up to 60 MiB/s |
| Max IOPS per Disk | 7500 IOPS | Up to 500 IOPS | Up to 500 IOPS |

Troubleshooting
When adding data disks to a Linux VM, you may encounter errors if a disk does not exist at LUN 0. If you are
adding a disk manually using the azure vm disk attach-new command and you specify a LUN ( --lun ) rather than
allowing the Azure platform to determine the appropriate LUN, make sure that a disk already exists, or will exist,
at LUN 0.
Consider the following example showing a snippet of the output from lsscsi :

[5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc
[5:0:0:1] disk Msft Virtual Disk 1.0 /dev/sdd

The two data disks exist at LUN 0 and LUN 1 (the first column in the lsscsi output details
[host:channel:target:lun] ). Both disks should be accessible from within the VM. If you had manually specified the
first disk to be added at LUN 1 and the second disk at LUN 2, you may not see the disks correctly from within your
VM.

NOTE
The Azure host value is 5 in these examples, but this may vary depending on the type of storage you select.

This disk behavior is not an Azure problem, but the way in which the Linux kernel follows the SCSI specifications.
When the Linux kernel scans the SCSI bus for attached devices, a device must be found at LUN 0 in order for the
system to continue scanning for additional devices. As such:
Review the output of lsscsi after adding a data disk to verify that you have a disk at LUN 0.
If your disk does not show up correctly within your VM, verify a disk exists at LUN 0.
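A hypothetical helper sketches this LUN 0 check against captured lsscsi output. The parsing logic and the default host number are assumptions based on the sample above:

```python
def has_lun0(lsscsi_output: str, host: int = 5) -> bool:
    """Return True if any device on the given SCSI host sits at LUN 0."""
    for line in lsscsi_output.splitlines():
        addr = line.split("]")[0].strip("[ ")           # e.g. "5:0:0:0"
        h, _channel, _target, lun = (int(x) for x in addr.split(":"))
        if h == host and lun == 0:
            return True
    return False

sample = (
    "[5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc\n"
    "[5:0:0:1] disk Msft Virtual Disk 1.0 /dev/sdd"
)
print(has_lun0(sample))  # True: /dev/sdc is attached at LUN 0
```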

Next steps
Attach a disk to add additional storage for your VM.
Create a snapshot.
Convert to managed disks.
