Storage
-
make it possible to use SSL on blob storage using custom domains
Currently you can use SSL, but you have to use the standard URL. You can create a CNAME to your storage account, but most browsers complain that the traffic was rerouted and is possibly an attack. There should be a way to install a domain certificate on your containers.
2,670 votes
Our apologies for not updating this ask earlier. SSL support for Blob Storage custom domain names is an important feature that is toward the front of our backlog. As soon as we have progress to share, we will do so. We will continue to provide updates at least once per quarter.
-
Add TableStorage LINQ query support for Select, Count and Contains
Critical functions such as Select, Count and Contains are not currently available when querying TableStorage data.
If I only want the total number of rows that match certain criteria, I have no choice but to retrieve the data, count it, and throw it away.
Adding support for Select would also help with heavy queries by only returning the data selected from a query.
Contains would be useful for searching. Using the Compare function is annoying.
1,515 votes
This work is on our backlog and currently under consideration. We will update here if this changes.
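Until server-side Count and Contains exist, the usual workaround is to request only a key column via a `$select` projection and aggregate client-side. The sketch below is hypothetical and runs over an in-memory list of entity dictionaries standing in for a query result; the function names are illustrative, not part of any Azure SDK.

```python
# Hedged sketch: emulate Count and Contains client-side over entities that
# would normally come back from a projected ($select) Table Storage query.

def count_matching(entities, predicate):
    """Count entities matching a predicate without retaining the data."""
    return sum(1 for e in entities if predicate(e))

def contains(entities, field, fragment):
    """Emulate a Contains() filter as a client-side substring match."""
    return [e for e in entities if fragment in e.get(field, "")]

entities = [
    {"PartitionKey": "logs", "RowKey": "1", "Message": "disk full"},
    {"PartitionKey": "logs", "RowKey": "2", "Message": "ok"},
    {"PartitionKey": "app",  "RowKey": "3", "Message": "disk warning"},
]

total = count_matching(entities, lambda e: e["PartitionKey"] == "logs")
hits = contains(entities, "Message", "disk")
```

Note that this still transfers every projected row over the wire, which is exactly the cost the request above complains about.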
-
Support secondary Indexes
Need to be able to sort on something other than the RowKey.
1,211 votes
We understand this is a top customer ask and as such it is currently on our backlog to be prioritized. We will update when the status changes.
-
Provide me with full text search on table storage
Does what it says on the tin... we really need better search capabilities over Azure Table Storage.
899 votes
A common need for users of Azure Table Storage is searching data in a Table using query patterns other than those that Table Storage provides efficiently, namely key lookups and partition scans. Using Azure Search, you can index and search Table Storage data (using full text search, filters, facets, custom scoring, etc.) and capture incremental changes in the data on a schedule, all without writing any code. To learn more, check out Indexing Azure Table Storage with Azure Search: https://docs.microsoft.com/en-us/azure/search/search-howto-indexing-azure-tables
-
Azure Control Panel functionality for snapshots or incremental backup
I know that data is replicated, but that doesn't protect against logical errors, accidental removal by an admin, or a hacker screwing up the data.
So I really would like to see this done natively from Azure instead of building my own backup systems like these:
http://blogs.msdn.com/b/windowsazurestorage/archive/2010/04/30/protecting-your-blobs-against-application-errors.aspx
834 votes
Our apologies for not updating this ask earlier. We view this ask as a set of related features. Soft-Delete for Blobs will address data loss as a result of logical errors. It will provide functionality to automatically generate and retain snapshots each time data is deleted or overwritten. This feature is at the front of our backlog. Object Versioning for Blobs will also address data loss as a result of logical errors. It will provide an interface to store and access multiple versions of the same object. This feature is also on our backlog, and depends on Soft-Delete. Write Once Read Many (WORM) protection for Blobs will disable object modification (by anyone, including administrators and hackers) for a specified period of time. This feature is also high on our backlog. We will be delivering this ask in phases. As soon as we have progress to share, we will…
-
Support a default blob for Blob storage containers
Blob storage is, in effect, a massive-scale static web server. This makes it very suitable for directly serving certain types of static web sites that may be hit by a big spike in load. The one static web server feature missing that is really hard to work around is the lack of a default document.
It should be possible to set a single blob in each container as the default blob and this should be returned if the HTTP GET request provides only the container name.
This would allow
http://myservice.blob.core.windows.net/
and
http://myservice.blob.core.windows.net/mycontainer/
767 votes
Our apologies for not updating this ask earlier. We think of this ask as part of the Static Websites feature. This feature is on our backlog, and it depends on SSL support for Blob Storage custom domain names. You can track progress on that feature here: https://feedback.azure.com/forums/217298/suggestions/3007732. As soon as we have updates to share on Static Websites, we will do so.
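In the absence of a default-document setting, the common workaround is a thin front end (for example a CDN rule or a small proxy) that rewrites container-root requests before they hit Blob storage. The path-rewriting logic is simple enough to sketch; the function name and default value here are hypothetical, not an Azure feature.

```python
# Hedged sketch: map a container-root request to a default blob, as a
# front-end rewrite rule might do before forwarding to Blob storage.

def resolve_default_blob(path, default_doc="index.html"):
    """If the request path ends at a container root ('/'), append the
    chosen default document; otherwise pass the path through unchanged."""
    if path.endswith("/"):
        return path + default_doc
    return path

root = resolve_default_blob("/mycontainer/")
blob = resolve_default_blob("/mycontainer/logo.png")
```

A rewrite like this is what the Static Websites feature referenced above would make unnecessary.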
-
Allow to upload data to Azure blobs by a classic FTP client
Think about the many customers that just want to upload lots of data to a cheap storage somewhere in the cloud by using FTP upload/download.
761 votes
-
713 votes
Azure Storage now provides a comprehensive set of security capabilities which together enable developers to build secure applications. Data can be secured in transit between an application and Azure by using Client-Side Encryption https://azure.microsoft.com/en-us/documentation/articles/storage-client-side-encryption/, HTTPS, or SMB 3.0. Storage Service Encryption https://azure.microsoft.com/en-us/documentation/articles/storage-service-encryption/ provides encryption at rest, handling encryption, decryption, and key management in a totally transparent fashion.
-
Add ability to view Azure Table size/entity count (rows)
Hi Mike,
I've created this idea as suggested on the forums: http://social.msdn.microsoft.com/Forums/en/windowsazure/thread/ea18ae29-36a3-42c6-8420-877216efbd42
One of the big challenges in adopting azure table storage over traditional SQL storage is the ability to know how much data is stored and how it is being used.
Being able to break size/rows down by partition would be invaluable when trying to modify / optimize Partition & Row keys. (Given data doesn't always grow as we would expect, and new bottlenecks can & will emerge).
In addition the ability to view the usage data per table / partition would be fantastic.
Obviously there are ways of…
612 votes
We understand this is a top customer ask and as such it is currently on our backlog to be prioritized. We will update when the status changes.
-
Remove storage account quota per subscription
Our application uses a multi-tenant architecture. To provide better service and security to our customers, we use one storage account per customer. The quota of 50 storage accounts per subscription makes it difficult to support such a scheme, because you always have to monitor the number of accounts used and ask the support team to extend the quota where possible, or create a new subscription for new customers where it is not.
602 votes
Thanks for the request! Although there is still a storage account quota per subscription, we have increased it from 50 to 100 for users who need more accounts. Please refer to the Azure Subscription and Service Limits, Quotas, and Constraints page for some of the most common Microsoft Azure limits (including the maximum limit for storage accounts per subscription).
http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/
-
Provide Time to live feature for Blobs
If I need to provide a user (or external system) with some data (a blob) which might be the outcome of some processing and has an expiration time, I'd like to just put a new blob and set a TTL property with a TimeSpan (or an absolute DateTime). When the period is over, my blob is deleted, so I don't have to pay for it and don't need to spin up some service to do it myself.
522 votes
Our apologies for not updating this ask earlier. Allowing users to define object expiration policies on Blobs is planned for the coming year. As soon as we have progress to share, we will do so. We will continue to provide updates at least once per quarter.
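Until expiration policies ship, the typical pattern is to stamp each blob with an `expires_at` value in its metadata and run a periodic sweep job that deletes anything past its expiry. A sketch of the sweep, using a plain dict in place of a container client (all names here are hypothetical):

```python
# Hedged sketch: a TTL sweep over blob metadata. 'blobs' is a dict of
# blob-name -> metadata dict, standing in for a container listing.
from datetime import datetime, timedelta, timezone

def sweep_expired(blobs, now=None):
    """Delete blobs whose 'expires_at' metadata is in the past; return
    the names removed so the job can log them."""
    now = now or datetime.now(timezone.utc)
    expired = [name for name, meta in blobs.items()
               if "expires_at" in meta and meta["expires_at"] <= now]
    for name in expired:
        del blobs[name]
    return expired

now = datetime(2018, 1, 1, tzinfo=timezone.utc)
blobs = {
    "report.csv":  {"expires_at": now - timedelta(hours=1)},
    "archive.zip": {"expires_at": now + timedelta(days=7)},
    "keep.txt":    {},  # no TTL: never swept
}
removed = sweep_expired(blobs, now=now)
```

This is exactly the "spin up some service for doing it myself" cost the request wants to avoid.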
-
Copy complete AzureFiles share
Using XDrive we can make a copy or backup from a VHD using blob capabilities instantly - in the same storage account, the same datacenter or a different datacenter.
Moving from XDrive to Azure Files we are losing this capability. The VHDs may contain thousands of files. Copying those from an XDrive into an Azure Files share, or from one share to another, takes a lot of time, even when using AzCopy.
It would be great if there was a possibility to create copies of whole shares the same way.
361 votes
Thank you for your feedback. We already have Azure Files Share Snapshot in our backlog to address this scenario. We do not have a specific timeline to share yet.
-
Extend Windows Explorer so that it integrates with Windows Azure Storage
Why pay for 3rd party tools when we could use a built-in Windows application? Windows Explorer is the most natural way to utilize the storage services of Windows Azure. Drag and drop files to copy them between the local file system and blob storage; use a custom view to display/edit tabular data in table storage and messages in queue storage.
338 votes
-
Allow Windows Azure Storage to filter the connection clients by their network info such as IP
Azure Storage is protected using a connection string which contains credentials. Usually this connection string is placed in a configuration file. Reality tells us that information placed in configuration files leaks. It is likely that the storage connection string will leak to unauthorized people. If someone has the connection string, he owns your Azure Storage!
It is not possible to compare the Azure Storage connection string to SQL connection strings, because SQL is not exposed to the web. Even if a bad guy has the connection string he cannot use it unless he gets network access to the SQL server or…
331 votes
-
Ability to delete from Table Storage by Partition
There is no good way to delete multiple entries in Table Storage. All you can do is delete one at a time. For logging tables this can become VERY expensive. It would be great if we had the ability to use Partitions as a way to delete logical groups of data in Table Storage in a single transaction.
This would allow for a partitioning scheme for grouping data in units that can easily be deleted. For example, logging data in WADLogsTable or rolling tables of data captured on a given partition can be archived easily and cleaned up.
326 votes
We understand this is a top customer ask and as such it is currently on our backlog to be prioritized. We will update when the status changes.
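The standard workaround is to query a partition's row keys and delete them in entity group transactions, which are limited to 100 operations per batch and must share a partition key. Sketching just the batching step (the function name is illustrative; submitting each batch would use your table SDK of choice):

```python
# Hedged sketch: group a partition's row keys into delete batches of at
# most 100, matching the entity-group-transaction limit.

def batch_partition_deletes(entities, partition_key, batch_size=100):
    """Return lists of RowKeys, each list small enough to delete in a
    single entity group transaction against one partition."""
    rows = [e["RowKey"] for e in entities
            if e["PartitionKey"] == partition_key]
    return [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]

log_entities = [{"PartitionKey": "logs", "RowKey": str(i)}
                for i in range(250)]
batches = batch_partition_deletes(log_entities, "logs")
```

Even batched, this still pays per-entity transaction costs, which is why a single "drop partition" operation is the real ask.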
-
ACLs for Azure Files
I've started experimenting with Azure Files. One of the features I'm missing is the ability to give access to folders/files on Azure Files based on Active Directory credentials. If you set up a typical file share, one would like to be able to grant/revoke access to folders and files based on information about users in AD.
291 votes
-
Rename blobs without needing to copy them
Copying blobs in order to rename them is a heavy operation, especially when the blob is big, or when you need to change many files, such as when changing a directory name.
257 votes
Renaming Blobs is on our backlog, but is unlikely to be released in the coming year. Right now, you can use the Azure Files service to address Azure Storage like a network share using the SMB 2.1 protocol. This enables using normal Windows APIs to rename files and directories. You can read more about the Files service on our blog: http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/12/introducing-microsoft-azure-file-service.aspx
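For completeness, the copy-then-delete workaround the request describes looks like this in outline. A plain dict stands in for a container here; against real Blob storage the copy step would be a server-side copy operation, and the function name is illustrative:

```python
# Hedged sketch: "rename" emulated as copy-then-delete, the only option
# Blob storage offers today.

def rename_blob(container, old_name, new_name):
    """Copy a blob to its new name, then delete the source."""
    if old_name not in container:
        raise KeyError(old_name)
    container[new_name] = container[old_name]  # server-side copy
    del container[old_name]                    # then delete the source

container = {"old/name.txt": b"payload"}
rename_blob(container, "old/name.txt", "new/name.txt")
```

For a large blob the copy is the expensive half, and renaming a "directory" prefix repeats it once per blob, which is the cost a native rename would remove.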
-
Ability to upload a file to windows azure blobs directly via Azure management portal
I understand that there are third-party tools out there that can be used, and the Data Transfer tool (SQL Azure Labs) can be used as well, but it would be great if we could upload a file to Windows Azure blobs directly via the Azure management portal.
242 votes
We are looking at ways to enhance the storage experience in the portal.
-
Custom 404 pages for missing blobs
Right now the system sends xml to the user if a file isn't found. This isn't very friendly to someone browsing from the web. I would like to have a redirect to a custom url for 404 errors instead of xml.
I can see the need for the xml when using the API so perhaps the redirect can be turned off when making the API calls.
This is also important so I can track when files are missing.
241 votes
We think of this ask as part of the Static Websites feature. You can track updates to that feature here: https://feedback.azure.com/forums/217298/suggestions/1180039.
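Until Blob storage can serve a custom error page itself, the workaround is a small front end that intercepts the miss and substitutes friendly HTML while recording the path for tracking. A sketch with a dict standing in for the blob store (the function name and page body are hypothetical):

```python
# Hedged sketch: front a blob store with a custom 404 page instead of
# the service's XML error body.

def serve_blob(store, path, not_found_page="<h1>Page not found</h1>"):
    """Return (status, body): the blob if present, otherwise a friendly
    HTML page with a 404 status the caller can log."""
    if path in store:
        return 200, store[path]
    return 404, not_found_page

store = {"/site/index.html": "<h1>Home</h1>"}
status, body = serve_blob(store, "/site/missing.html")
```

Keeping the XML behavior for API callers, as the request suggests, would just mean skipping this wrapper on non-browser traffic.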
-
Allow user-based access to Blob Containers (for support employees)
For auditing purposes and to prevent data corruption, we want to give our support employees a user-centric, read-only access to Blob Containers in order to be able to investigate possible data corruptions (caused by bugs in systems).
This is not possible now because the security architecture of Blob Service does not even know the concept of users or roles.
SAS is not a secure enough mechanism, because it gives access to anyone just by sharing a link, plus you can't track who is actually using it.
207 votes
Our apologies for not updating this ask earlier. Azure Active Directory integration with Blob Storage will address this ask, and is toward the front of our backlog. As soon as we have progress to share, we will do so. We will continue to provide updates at least once per quarter.
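To make the SAS objection concrete: a shared access signature is essentially an HMAC over the resource and policy fields, appended to the URL. Anyone holding the link holds the credential, and the signature carries no user identity to audit. The sketch below illustrates that structure in simplified form; it is not the real SAS string-to-sign format, and all names are illustrative.

```python
# Hedged sketch: a simplified SAS-style signed link. The signature binds
# the resource and expiry to a secret key, but identifies no user.
import base64
import hashlib
import hmac

def sign_url(resource, expiry, key):
    """Append an expiry and an HMAC-SHA256 signature to a resource URL."""
    string_to_sign = f"{resource}\n{expiry}"
    signature = base64.b64encode(
        hmac.new(key, string_to_sign.encode(), hashlib.sha256).digest()
    ).decode()
    return f"{resource}?se={expiry}&sig={signature}"

link = sign_url("https://acct.blob.core.windows.net/c/b.txt",
                "2018-12-31T00:00:00Z", b"secret-key")
```

Because the link is self-contained and deterministic, it authenticates the bearer, not a person, which is why per-user, auditable access needs the Azure AD integration mentioned above.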