I am Joannes Vermorel, founder at Lokad. I am also an engineer from the Corps des Mines who initially graduated from the ENS.

I have been passionate about computer science, software matters and data mining for almost two decades.




Details on the .NET first strategy for CNTK

An extensive discussion is taking place on the CNTK project. As I am partly responsible for this discussion, I am gathering here some more concrete proposals for CNTK.

Correctness by design and BrainScript

My company, Lokad, has built a complex data-driven analytical solution on .NET. Because machine learning data pipelines are hellish to debug, we seek technologies that ensure as much design correctness as possible. For example, in many programming languages, a certain degree of design correctness can be obtained through strong typing. Some languages like Rust or Clojure offer other kinds of guarantees.

My immediate interest in BrainScript was not for the language itself, but for the degree of design correctness that appears to be enforceable in BrainScript at compile time. For example, a static analysis can tell me the total number of model parameters. Based on this number, it would be easy to implement a rule in our continuous integration build that prevents an abusively large training task from ever going into production.

Because of the limited expressivity of BrainScript (a good thing!), many more properties can be enforced at compile time, without even starting CNTK. Compile time is important because the continuous integration server may not have access to all the data that is required to get CNTK up and running.

Then, BrainScript is only one option to deliver this correctness by design. In .NET/C#, it would be straightforward to implement a tiny API that delivers the same expressivity as BrainScript. The network definition would then be compiled in .NET just like Expression Trees are compiled (*). BrainScript itself could then be thought of as a human-readable serialization format for a valid expression built through this .NET API.

(*) OK, it's not strictly C# compile time, but in practice, if your CNTK-network-description-to-be-compiled is reachable through a unit test, then any failure to compile will be caught by the unit tests, which is good enough in practice.
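To make this concrete, here is a minimal sketch of what such a declarative .NET API could look like. All the types below (`Layer`, `Dense`, `Network`) are hypothetical and do not exist in CNTK; the point is only that a statically analyzable network description makes rules like the parameter-count check trivial to enforce in continuous integration:

```csharp
// Hypothetical sketch of a BrainScript-like declarative API in C#.
using System;
using System.Collections.Generic;
using System.Linq;

abstract class Layer
{
    public abstract long ParameterCount { get; }
}

sealed class Dense : Layer
{
    public int InputDim { get; }
    public int OutputDim { get; }
    public Dense(int inputDim, int outputDim) { InputDim = inputDim; OutputDim = outputDim; }
    // Weights plus biases of a fully-connected layer.
    public override long ParameterCount => (long)InputDim * OutputDim + OutputDim;
}

sealed class Network
{
    public IReadOnlyList<Layer> Layers { get; }
    public Network(params Layer[] layers) { Layers = layers; }
    public long TotalParameterCount => Layers.Sum(l => l.ParameterCount);
}

static class Program
{
    static void Main()
    {
        var net = new Network(new Dense(784, 200), new Dense(200, 10));

        // The continuous integration rule: reject abusively large models
        // at build time, without ever touching the training data or CNTK.
        const long maxParameters = 10_000_000;
        if (net.TotalParameterCount > maxParameters)
            throw new InvalidOperationException("Model too large for production.");

        Console.WriteLine(net.TotalParameterCount); // 784*200+200 + 200*10+10 = 159010
    }
}
```

Because the description is plain .NET data, any static property (parameter counts, layer shapes, input dimensions) can be asserted in an ordinary unit test.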

In ticket #1962, I requested a BrainScript extension for Visual Studio Code because, at the time, I incorrectly thought that BrainScript was the core strategy for CNTK. Indeed, BrainScript is still listed as one of the Top 8 reasons to favor CNTK over TensorFlow. As it appears that BrainScript is not the core strategy anymore, I don't see any particular reason for the CNTK team to invest in BrainScript tooling. I am completely fine with that, as long as a .NET-friendly alternative is provided which shares the good properties of BrainScript.

Train vs. Eval, production and versioning support

From a machine learning perspective, training is a very distinct operation from evaluation. However, as far as software engineering is concerned, the two operations typically live very close together. Indeed, it is usually one system that collects the data, feeds the data to the training logic, collects the model, distributes the model to possibly many "clients", and ensures that those "clients" are capable of executing the evaluation logic. In a company like Lokad, we have complex data pipelines, and the best way to ensure that the training data are consistent with the evaluation inputs is to factorize the logic - i.e. have the same bits of code (C# in our case) cover both use cases.

By design, this implies that any machine learning toolkit that does not offer unified support for both training and evaluation represents a major friction. It's not only a lot more costly to implement, it's also very error-prone, as we need to find alternative ways to ensure that the two implementations (training-side and evaluation-side) are and remain strictly consistent in the way data are fed to the deep learning toolkit on both sides. In particular, this is why Python is so painful for a .NET solution: we not only end up spreading an alternative stack all over the place, we end up duplicating implementations.
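A sketch of what this factorization looks like in practice, using a hypothetical `OrderFeaturizer` class: the exact same method produces the inputs on the training side and on the evaluation side, so the two encodings cannot silently drift apart.

```csharp
// Hypothetical featurizer, shared between training and evaluation pipelines.
using System;
using System.Linq;

sealed class OrderFeaturizer
{
    // One code path for both sides guarantees identical feature encoding.
    public float[] Featurize(DateTime orderDate, decimal amount) =>
        new[]
        {
            (float)orderDate.DayOfWeek / 6f,        // day-of-week, normalized
            orderDate.Month / 12f,                  // crude seasonality signal
            (float)Math.Log10(1 + (double)amount)   // log-scaled order amount
        };
}

static class Program
{
    static void Main()
    {
        var featurizer = new OrderFeaturizer();

        // Training system and evaluation "clients" call the same method.
        var trainInput = featurizer.Featurize(new DateTime(2017, 6, 5), 129.90m);
        var evalInput  = featurizer.Featurize(new DateTime(2017, 6, 5), 129.90m);

        Console.WriteLine(trainInput.SequenceEqual(evalInput)); // True
    }
}
```

With a Python training stack and a .NET evaluation stack, this method would have to exist twice, once per language, and keeping the two copies consistent becomes a permanent liability.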

Then, from v1 to v2, CNTK changed the serialization format for models. Companies may end up with significant amounts of resources invested in training one particular model, so breaking the serialization format is bad. Yet, at the same time, it would be unreasonable to freeze the model serialization format forever, because it would prevent many desirable improvements for CNTK.

Once again, the solution is simply .NET. In C#, implementing complex binary (de)serializers is straightforward; arguably less than 1/10th of the effort compared to C++. Thus, CNTK could adopt an approach where the C++ toolkit only supports one format - the latest - and transfer the burden of maintaining multiple (de)serializers to C#. This approach would also offer the possibility to easily translate models in the "old" formats to the new formats. Moreover, the translation could even be done at runtime in .NET/C# if performance is not a concern (it's not always a concern).
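As a sketch of this division of labor, here is what a version-dispatching deserializer looks like in C#. The format layouts below are entirely made up; the point is how little code it takes to keep legacy formats readable on the .NET side while the C++ toolkit only ever sees the latest one:

```csharp
// Hypothetical model formats; the layouts do not match CNTK's actual formats.
using System;
using System.IO;

sealed class Model { public float[] Weights = Array.Empty<float>(); }

static class ModelSerializer
{
    public static Model Deserialize(BinaryReader reader)
    {
        int version = reader.ReadInt32();
        return version switch
        {
            1 => ReadV1(reader),  // legacy format, kept alive in C# only
            2 => ReadV2(reader),  // current format, matching the C++ toolkit
            _ => throw new NotSupportedException($"Unknown model format v{version}")
        };
    }

    static Model ReadV1(BinaryReader r)
    {
        // v1 (hypothetical): 16-bit weight count, then single-precision weights.
        int count = r.ReadUInt16();
        var m = new Model { Weights = new float[count] };
        for (int i = 0; i < count; i++) m.Weights[i] = r.ReadSingle();
        return m;
    }

    static Model ReadV2(BinaryReader r)
    {
        // v2 (hypothetical): 64-bit weight count, same weight encoding.
        long count = r.ReadInt64();
        var m = new Model { Weights = new float[count] };
        for (long i = 0; i < count; i++) m.Weights[i] = r.ReadSingle();
        return m;
    }
}
```

Pairing each `ReadVn` with a `WriteVn` for the latest version is all it takes to offer old-to-new model translation as a pure .NET utility.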

The laundry list for .NET-first CNTK

In this section, I try to briefly cover the most pressing elements for a .NET-first CNTK.

Naked NuGet deployments. NuGet is the de-facto approach to deploy components in the .NET ecosystem. It's already adopted by CNTK for the C# Evaluation bindings, but not for the other parts. In particular, deployments should not involve 3rd party stacks like Python (or Node.js or Java).

A network description API in .NET/C#. The important angle is: the API is declarative, and offers the possibility to ensure some degree of correctness by design. The CNTK team is not even expected to provide the tooling to ensure the correctness by design. As long as the network description can be reflected in C#, the community can handle that part.

Low-level abstractions for high-perf I/O. As posted in #1963, it's important to offer the possibility to efficiently stream data to CNTK. From a .NET/C# perspective, a p/invoke passing around byte arrays is good enough, as long as the corresponding binary serializers are provided in .NET/C# as well.
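A sketch of what "byte arrays across p/invoke" means in practice. The native entry point below is hypothetical (CNTK does not expose this exact signature); what matters is that a minibatch of floats can be packed into a `byte[]` with a single bulk copy, with no per-element marshalling overhead:

```csharp
// Packing a minibatch for a (hypothetical) native boundary.
using System;
using System.Runtime.InteropServices;

static class MinibatchInterop
{
    // Hypothetical native function, shown for illustration only:
    // [DllImport("Cntk.Core")]
    // static extern void FeedMinibatch(byte[] data, int length);

    public static byte[] Pack(float[] minibatch)
    {
        var buffer = new byte[minibatch.Length * sizeof(float)];
        // Buffer.BlockCopy is a memcpy-like bulk copy over the raw bytes.
        Buffer.BlockCopy(minibatch, 0, buffer, 0, buffer.Length);
        return buffer;
    }
}
```

The corresponding .NET-side binary serializers then own the layout of that buffer, which keeps the native API surface minimal.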

The non-goals for a .NET-first CNTK

Alternatively, there are also non-goals for the first version of a .NET-first CNTK.

Fully managed implementation. For a high-performance library like CNTK, a native C++ implementation feels just fine. Many low level parts of .NET are implemented this way, like System.Numerics.

ASP.NET specifics. As long as compatibility is ensured with .NET, compatibility will be ensured for ASP.NET. I don't see anything to be done specifically for ASP.NET.

Jupyter notebooks. Jupyter is cool, no question. Yet, the interactive perspective is a very Pythonic way of doing things. While more features are desirable, Jupyter does not strike me as critical. Interactive C# has been around for a long time, but there is very little community traction to support it.

Visual designer for networks. Visual design is cool for teaching, but this does not strike me as a high-priority feature for the .NET ecosystem. Again, the tools you need for 2h training sessions are very different from what you need for a mission-critical business system.

Unity specifics. Unity is very cool, but what Unity needs most - as far as CNTK is concerned - is a clean .NET integration for CNTK itself. The rest is a bonus.


.NET-first strategy for CNTK

CNTK is an incredible deep learning toolkit from Microsoft. Despite being little known, under the hood the technology rivals the capabilities of TensorFlow. While originating from Microsoft, CNTK unfortunately decided to steer away from the Microsoft ecosystem, actually making CNTK a less viable option than TensorFlow as far as .NET is concerned.

My conclusions:

  • as a contender for the Python ecosystem, CNTK is a lost cause. TensorFlow has already won by a large margin; just like x86-64 won over IA-64.
  • unlike Python, the .NET ecosystem is still vastly underserved as far as deep learning is concerned; this would be a perfect spot for CNTK if CNTK opted for a .NET-first strategy.
  • by establishing CNTK as the go-to option for deep learning in .NET, CNTK would be very well positioned to become the go-to option for Java as well.

As a long-time supporter of the Microsoft ecosystem, it really pains me to see otherwise excellent Microsoft technologies heading into the wall, just because their strategy makes them irrelevant by design for the broader community.

At its core, CNTK is a C++ product, with a primary focus on raw performance, that is, making the most, accuracy-wise, of the computing resources that are allocated to a machine learning task. Yet, as confirmed by the team, CNTK is focusing on Python as its high-level entry point.

CNTK also features BrainScript, a tiny DSL (domain-specific language) intended to design a deep learning network with a high-level declarative syntax. While BrainScript is advertised as a scripting language, it's a glorified configuration file; which is an excellent option in the deep learning context.

A frontal assault on TensorFlow is a lost cause

The Python-first orientation of CNTK is a strategic mistake for CNTK, and will only consolidate CNTK as a distant second behind TensorFlow.

The Python deep-learning ecosystem is already very well served through TensorFlow and its own ecosystem. As a quick guesstimate, TensorFlow presently has 100x the momentum of CNTK. Amazon tells me that there are 50+ books on TensorFlow against exactly zero books for CNTK.

Can CNTK catch up frontally against TensorFlow? No. TensorFlow has the first-mover advantage, and my own casual observations of HackerNews indicate that TensorFlow is even growing faster than CNTK, further widening the gap. CNTK might have better performance, but it's not a game changer, not a sustainable game changer anyway. The TensorFlow teams are strong, and the CNTK performance tuning is being aggressively replicated.

Anecdotally, the Microsoft teams themselves seem to be internally favoring TensorFlow over CNTK. TensorFlow is already more mature for the Microsoft ecosystem, i.e. .NET, than CNTK.

Business 101: don't engage frontally in a battle that is already lost. You can be ambitious, but you need an "angle".

Why Python is a lost cause for Microsoft

Microsoft has tried to reach out to the Python ecosystem for more than a decade; the efforts dating back to IronPython in 2006, followed by the Visual Studio tools in 2011. Yet, at present, after a decade of efforts, the fraction of the Python ecosystem successfully attracted by Microsoft remains negligible: let's say it underflows measurements. I fail to see why CNTK would be any different.

The Python ecosystem has consolidated itself around strictly non-Microsoft technologies and non-Microsoft environments. There is no SQL Server, it's PostgreSQL. There is no Microsoft Azure, it's AWS or Google Cloud. Etc. If I were to bet some money, I would gamble that CNTK won't have any meaningful presence in the Python machine learning ecosystem of tomorrow. Most likely, it will be TensorFlow and a couple of non-Microsoft contenders (*).

(*) I am not saying that no strong contenders for TensorFlow will emerge from the deep learning community; I am saying that no strong contenders for TensorFlow targeting Python will emerge from Microsoft.

Python is a major friction for a .NET solution

One deep yet frequent misunderstanding in academic or research circles is the degree of pain that heterogeneous software stacks represent for companies, clients and vendors alike. Maintaining healthy production systems when a single stack (e.g. .NET or Java or Python) is involved is already a challenge. Introducing a second stack is very costly in practice.

My company Lokad is developing a complex .NET solution based on Microsoft Azure. If tomorrow Lokad were to start relying on Python, we would have:

  • to monitor closely all the Python packages and dependencies, just like we do for .NET packages, if only to be capable of swiftly deploying security fixes.
  • to install and maintain consistent Python versions across all our machines, from the development workstations to production servers, and organize company-wide (*) upgrades accordingly.
  • to develop and foster a culture of Python, which includes knowing the language (easy) but also all the institutional knowledge needed to build good Python apps (hard).

(*) Sometimes you're out of luck and bits of your software live on the client side too, forcing you into multi-version support of the base framework itself.

Keeping the technological mass of a software solution under control is a very important concern. LaTeX might have succeeded despite being built out of a dozen programming languages, but such an approach will kill any independent software vendor (ISV) with a high degree of certainty.

All those considerations have nothing to do with whether Python is good or bad. The challenge would be identical if we were to introduce the Java or Swift stacks into our .NET codebase.

The tech mass of BrainScript is minimal

While Python is a whole technology stack of its own, BrainScript, the DSL of CNTK, is nothing but a glorified configuration file. As far as technological mass is concerned, managing a DSL like BrainScript is a no-brainer. Let's face it, this is not a new language to learn. The configuration file for ASP.NET (web.config) is an order of magnitude more complex than BrainScript as a whole, and nobody refers to web.config files as being a programming language of their own.

While the CNTK team decided to steer away from BrainScript, I would, on the contrary, suggest doubling down on this approach. Machine learning is complicated, bugs are hard to track down, and unlike machine learning competitions where datasets are 100% well-prepared, data in the real world is messy and poorly documented. Real software businesses face deadlines and limited budgets. We need machine learning tools, but more importantly, we need tools that deliver some degree of correctness by design. BrainScript is imperfect, but it's a solid step in the right direction.

.NET is a massive opportunity for CNTK

The .NET ecosystem is vast, arguably significantly larger than that of Python, and yet fully underserved as far as deep learning is concerned.

It is a misconception to think that software solutions designed with .NET would benefit less from deep learning than Python software solutions. The needs for deep learning and the expected benefits are the same. Python just happens to be much more popular in the publishing community than .NET.

Most .NET solution vendors, like Lokad, would immediately jump on CNTK if .NET were given a strong, clear priority. Indeed, a .NET-first perspective would be a game changer for CNTK in this ecosystem. Instead of struggling with second-class citizens, like the current C# bindings of CNTK, we would benefit from first-class citizens, which would happen to be completely aligned with the rest of the .NET ecosystem.

Also, .NET is very close to Java - at least, Java is much closer to .NET than it is to Python. Establishing CNTK as a deep learning leader in the .NET ecosystem would also make a very strong case for reaching out to the Java ecosystem later on.

.NET/C# is superior to Python for deep learning

Caveat: opinionated section

C# is nearly uniformly superior to Python: performance, productivity, tooling.

Performance is a primary concern for the platform intended as the high-level instrumentation of the deep learning infrastructure. Indeed, one should never underestimate how much development effort the data pipeline represents. It might be possible to describe a deep learning network in 50 lines of code, yet, in practice, the data pipeline that surrounds those lines weighs thousands of lines. Moreover, because we are moving a lot of data around, and because preprocessing the data can be quite challenging on its own, the data pipeline needs to be efficient.

Anecdote: at Lokad, we have multiple data pipelines that involve crunching over 1TB of data on a daily basis. We run those data pipelines on highly optimized C# that makes aggressive use of the async capabilities of C#, but also of low-level C-like algorithms.

With .NET/C#, my own experience building Lokad indicates that it's usually possible to achieve over 50% of the raw C performance on CPU by paying minimal attention to performance - i.e. don't go crazy with objects, use structs when relevant, etc. With CPython, achieving even 10% of the C performance is a struggle, and frequently we end up with 1% of the performance of C. Yes, PyPy exists, but then, the Python ecosystem is badly fragmented, and PyPy is not even compatible with TensorFlow. When it comes to machine learning, the raw performance of the high-level interop language matters a lot, because it's where all the data preprocessing happens, which is where 99% of the software investments are made. Falling back to C++ whenever you need performance is no longer a reasonable option in 2017.
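A minimal sketch of what "minimal attention to performance" means in C#. Using a struct (rather than a class) keeps the records inline in one contiguous array, with no per-item heap allocation, so a tight loop over them behaves much like the equivalent C code:

```csharp
// Illustrative sketch: struct-based records for a cache-friendly data pipeline.
using System;

struct Observation          // struct, not class: no per-item allocation
{
    public int Sku;
    public float Quantity;
}

static class Program
{
    static void Main()
    {
        // One contiguous block of memory holding a million records.
        var data = new Observation[1_000_000];
        for (int i = 0; i < data.Length; i++)
            data[i] = new Observation { Sku = i % 1000, Quantity = 1f };

        // Tight C-like aggregation loop; no virtual calls, no boxing.
        double total = 0;
        for (int i = 0; i < data.Length; i++)
            total += data[i].Quantity;

        Console.WriteLine(total); // 1000000
    }
}
```

Had `Observation` been a class, the array would hold a million pointers to scattered heap objects, and the same loop would pay for indirections and garbage collection pressure.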

.NET/C# is a superior alternative to Python to build type-safe, high-performance, production-grade data pipelines. Moreover, I would argue, another debatable point, that C# as a language is also evolving faster than Python.

Finally, .NET Core being open source and now working on both Linux and Windows, there are no more limitations preventing the use of .NET/C# as the middleware glue for CNTK either. Again, this would contribute to making CNTK easy to integrate into .NET solutions, which represent the strategic market that Microsoft has a solid chance to capture.


Meeting Eric Rudder, Senior Vice-President, Microsoft

Yesterday, I had the chance to meet Eric Rudder, Senior Vice President at Microsoft for nearly 1h30, along with three of my students who actively contributed to the Sqwarea project, an open source C# game designed for Windows Azure.

Eric proposed internships at MS Research after about 45min of discussion (really nice, since CS students are expected to do a research internship in the US in their 2nd year at the ENS).

(From left to right: Joannes Vermorel, Ludovic Patey, Robin Morisset, Eric Rudder, Fabrice Ben Hamouda)

Special thanks to Thomas Serval and Pierre-Louis Xech who made this meeting possible in the first place.


Thinking an academic package for Azure

This year is the 4th time that I am teaching Software Engineering and Distributed Computing at the ENS. The classroom project of 2010 is based on Windows Azure, like the one of 2009.

Today, I have been kindly asked by Microsoft folks to suggest an academic package for Windows Azure, to help those like me who want to get their students started with cloud computing in general, and Windows Azure in particular.

Thus, I decided to post my early thoughts here. Keep in mind that I have strictly no power whatsoever over Microsoft strategy. I am merely publishing some thoughts in the hope of getting extra community feedback.

I believe that Windows Azure, with its strong emphasis on tooling experience, is a very well suited platform for introducing cloud computing to Computer Science students.

Unfortunately, getting an extra budget allocated for a cloud computing course in the Computer Science Department of your local university is typically complicated. Indeed, very few CS courses require a specific budget for experiments. Furthermore, the pay-as-you-go pricing of cloud computing goes against nearly every single budgeting rule that universities have in place - at least in France - but I would guess similar policies exist in other countries. Administrations tend to be wary of "elastic" budgeting, and IMHO, rightfully so. This is not a criticism, merely an observation.

In my short 2-year cloud teaching experience, a nice academic package sponsored by Microsoft has tremendously simplified delivering the "experimental" part of my cloud computing course.

Obviously, unlike software licenses, offering cloud resources costs very real money. In order to make the most of resources allocated to academics, I would suggest narrowing the offer down to the students who are the most likely to have an impact on the software industry in the next decade.

The following conditions could be applied to the offer:

  1. Course is setup by a Computer Science Department.
  2. It focuses on cloud computing and/or software engineering.
  3. Hands-on, project-oriented teaching.
  4. Small classroom, 30 students or less.

Obviously there are plenty of situations where cloud computing would make sense and not fit into these constraints, such as a bioinformatics class with a data crunching project, or large-audience courses with 100+ students... but the package that I am proposing below is unlikely to match the needs of those situations anyway.

For the academic package itself, I would suggest:

  1. 1 book per student on Visual Studio 20XY (replace XY by the latest edition).
  2. 4 months of Azure hosting with:
    • 4 small VMs
    • 30 storage accounts, cumulative storage limited to 50GB.
    • 4 small SQL instances of 1GB.
    • Cumulative bandwidth limited to 5GB per day.

Although it might be surprising in this day and age of cloud computing, I have found that artifacts made out of dead trees tend to be the most valuable ingredient for a good cloud computing course, especially those 1000+ page ones about the latest versions of Visual Studio / .NET (future editions might even include a chapter or two about Windows Azure, which would be really nice).

Indeed, in order to tackle the cloud, students must first overcome the difficulties posed by their programming environments. One can argue that everything can be found on the web. That's true, but there is so much information online about .NET and Visual Studio that students get lost and lose their motivation if they have to browse through an endless flow of information.

Furthermore, I have found that teaching the basics of C# or .NET in a Computer Science course is a bad idea. First, it's an attempt to kill students out of sheer boredom. Just imagine yourself listening for 3h straight to someone enumerating the keywords of a programming language. Second, you have little or no control over the background of your students. Some might be Java or C++ gurus already, while some might have never heard of OO programming.

With the book at hand, I suggest simply asking students to read a couple of chapters from one week to the next, and quizzing them on their reading at the beginning of each session.

Then, concerning the Windows Azure package itself, I suggest 4 months' worth of CPU, as that should fit most courses. If the course spreads over more than 4 months, then I would suggest that students start optimizing their app not to use all 4 VMs all the time.

4 VMs seems just enough to feel both the power and the pain of scaling out. It brings a handy 4x speed-up if the app is well designed, but represents a world of pain if the app does not correctly handle concurrency.

The same idea applies to SQL instances. Offering a single 10GB instance would make things easier, but the course should be focused on scaling out, not scaling up. Thus, there is no reason to make things easier here.

In practice, I have found that offering individual storage accounts simplifies experiments, although there is little case for offering either lots of storage or lots of bandwidth.

In total, the package would represent a value of roughly $2500 (assuming $30 per book), or, from a different angle, about $100 per student. Not cheap, but attracting talented students seems a worthy (albeit long-term) investment.



Big Wish List for Windows Azure

At Lokad, we have been working with Windows Azure for more than a year now. Although Microsoft is a late entrant in the cloud computing arena, I am so far extremely satisfied with this choice, as Microsoft is definitively moving forward in the right direction.

Here is my Big Wish List for Windows Azure. These are the features that would turn Azure into a killer product, deserving a lion-sized share of the cloud computing marketplace.

My wishes are ordered by component:

  • Windows Azure
  • SQL Azure
  • Table Storage
  • Queue Storage
  • Blob Storage
  • Windows Azure Console
  • New services

Windows Azure

Top priority:

  • Faster CPU burst: the total time between the initial VM request (through the Azure Console or the Management API) and the start of the client code execution is long, typically 20min, and in my (limited) experience above 1h for any larger number of VMs (say 10+ VMs). Obviously, we are nowhere near real-time elastic scalability. In comparison, SQL Azure needs no more than a few seconds to instantiate a new DB. I would really like to see such excellent behavior on the Windows Azure side.
  • Smaller VMs: for now, the smallest VMs are 2GB large and cost $90/month, which brings the cost of a modest web app to $200/month (considering a web role and a worker role). Competitors (such as Rackspace) are already offering much smaller VMs, down to 256MB per instance, priced about 10x cheaper. I would really like to see that on Azure as well. Otherwise, scaled-down apps are just not possible.
  • Per-minute charge: for now, Azure charges by the hour, which means that any hour you start consuming will be charged fully. Obviously, it would be a great incentive to improve performance to start charging by the minute, so that developers could really fine-tune their cloud usage to meet the demand without wasting resources. Obviously, such a feature makes little sense if your VMs take 1h to get started.
  • Per-VM termination control: currently, it is not possible to tell the Azure Fabric which VM should be terminated, which is rather annoying. For example, long-running computations can be interrupted at any moment (they will have to be performed again) while idle VMs might be kept alive.
  • Bandwidth and storage quotas: most apps are never supposed to require truckloads of bandwidth or storage. If they do, it just means that something is going really wrong. Think of a loop endlessly polling some data from a remote data source. With pay-as-you-go, a single VM can easily generate 10x its own monthly costs through faulty behavior. To prevent such situations, it would be much nicer to assign quotas to roles.

Nice to have:

  • Instance count management through RoleEnvironment: the .NET class RoleEnvironment provides basic access to the properties of the current Azure instance. It would be really nice to provide here native .NET access to instance termination (as outlined above), and to instance allocation requests - considering that each role should be handling its own scalability.
  • Geo-relocation of services: currently, the geolocation of a service is set at setup time and cannot be changed afterward. Yet, the default location is "Asia" (that's the first item of the list), which makes the process quite error-prone (any manual process should be considered error-prone anyway). It would be nicer if it were possible to relocate a service - possibly with limited downtime, as it's only a corrective measure, not a production imperative.

SQL Azure

Top priority:

  • DB snapshot & restore toward Blob Storage: even if the cloud is perfectly reliable, cloud app developers are not. The data of a cloud app (like any other app btw) can be corrupted by faulty app behavior. Hence, frequent snapshots should be taken to make sure that data can be restored after a critical failure. The ideal solution for SQL Azure would be to dump DB instances directly into Blob Storage. Since DB instances are kept small (10GB max), SQL Azure would be really nicely suited for this sort of behavior.
  • Smaller VMs (starting at 100MB for $1/month): 100MB is already a lot of data. SQL Azure is a very powerful tool to support scaled-down approaches, possibly isolating the data of every single customer (in the case of a multi-tenant app) into an isolated DB. At $10/month, the overhead is typically too large to go for such strong isolation; but at $1/month, it would become the de-facto pattern, leading to smaller and more maintainable DB instances (as opposed to desperately trying to scale up monolithic SQL instances).
  • Size auto-migration: currently, a 1GB DB cannot be upgraded to a 10GB instance. The data has to be manually copied first, and the original DB deleted later on (and vice-versa). It would be much nicer if SQL Azure took care of auto-scaling the size of the DB instances up or down (within the 10GB limit, obviously).

Nice to have:

  • Geo-relocation of service: same as above. Downtime is OK too; it's just a corrective measure.

Table Storage

Top priority:

  • REST-level .NET client library: at present, Table Storage can only be accessed through an ADO.NET implementation that proves to be rather troublesome. ADO.NET feels in the way if you really want to get the most out of Table Storage. Instead, it would be much nicer if a .NET wrapper around the REST API was provided as low-level access.

Nice to have:

  • Secondary indexes: this one has already been announced; but I am re-posting it here as it would be a really nice feature nonetheless. In particular, this would be very handy to reduce the number of I/O operations in many situations.

Queue Storage

Nice to have:

  • Push multiple messages at once: the Queue offers the possibility to dequeue multiple messages at once, but messages can only be queued one by one. Symmetrizing the queue behavior by offering batch writes too would be really nice.

Blob Storage

Nice to have:

  • Reverse Blob enumeration: prefixed Blob enumeration is probably one of the most powerful features of Blob Storage. Yet, items can only be enumerated in increasing order of their respective blob names, and in many situations the "canonical" order is exactly the opposite of what you want (ex: retrieve blob names prefixed by dates, starting with the most recent ones). It would be really nice if it were possible to enumerate the other way around too.
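In the meantime, a common workaround for ascending-only enumeration is to encode blob names with an inverted timestamp, so that ascending lexicographic order happens to return the most recent blobs first. A minimal sketch (the naming scheme is mine, not an Azure feature):

```csharp
// Workaround sketch: inverted-tick blob names for reverse-chronological listing.
using System;

static class BlobNaming
{
    public static string ReverseChronologicalName(DateTime utc) =>
        // Pad to a fixed 19-digit width so that lexicographic order on the
        // blob name matches numeric order on the inverted tick count.
        (DateTime.MaxValue.Ticks - utc.Ticks).ToString("D19");
}

static class Program
{
    static void Main()
    {
        var older = BlobNaming.ReverseChronologicalName(new DateTime(2010, 1, 1));
        var newer = BlobNaming.ReverseChronologicalName(new DateTime(2010, 2, 1));

        // The newer blob gets the smaller key, so it sorts first in
        // the ascending enumeration that Blob Storage provides.
        Console.WriteLine(string.CompareOrdinal(newer, older) < 0); // True
    }
}
```

This works, but it forces the naming decision at write time; a native reverse enumeration would remain much cleaner.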

Windows Azure Console

The Windows Azure Console is probably the weakest component of Windows Azure. In many ways, it's a real shame to see such a good piece of technology so much dragged down by the abysmal usability of its administrative web client.

Top priority:

  • 100x speed-up: when I say 100x, I really mean it; and even with a 100x factor, it will still be rather slow by most web standards, as a refresh latency of 20min is not uncommon after updating the configuration of a role.
  • Basic multi-user admin features: for now, the console is a mono-user app, which is quite a pain in any enterprise environment (what happens when Joe, the sysadmin, goes on vacation?). It would be much nicer if several Live IDs could be granted access rights to an Azure project.
  • Billing is a mess, really: besides the fact that about 10 counter-intuitive clicks are required to navigate from the console to your consumption records, the consumption reporting is still of substandard quality. Billing cries for massive look & feel improvements.

Nice to have:

  • Project rename: once named, projects cannot be renamed. This situation is rather annoying, as there are many situations that would call for a naming correction. At present, if you are not satisfied with your project name, you've got no choice but to reopen an Azure account and start all over again.
  • Better handling of large projects: the design of the console is OK if you happen to have a few services to manage, but beyond 10 services, the design starts getting messy. Clearly, the console has not been designed to handle dozens of services. It would be way nicer to have a compact tabular display to deal with the service list.
  • Aggregated dashboard: Azure services are spread among many panels. With the introduction of new services (Dallas, ...), getting a big picture of your cloud resources is getting more and more complex. Hence, it would be really nice to have a dashboard aggregating all resources being used by your services.
  • OpenID access: Live ID is nice, but OpenID is nice too. OpenID is gaining momentum, and it would be really nice to see Microsoft supporting OpenID here. Note that there is no issue with supporting Live ID and OpenID side by side.

New services

Finally, there are a couple of new services that I would be thrilled to see featured by Windows Azure:

  • .NET Role Profiler: in a cloud environment, optimizing has a very tangible ROI, as each performance gain is reflected in a lower consumption bill. Hence, a .NET profiler would be a killer service for cloud apps based on .NET. Even better, low-overhead sampling profilers could be used to collect data even for systems in production.
  • Map Reduce: already featured by Amazon WS, it's massively useful for those of us (like Lokad) who perform intensive computations on the cloud. Microsoft has already been moving in this direction with DryadLinq, but I am eager to see how Azure will be impacted.


This is a rather long list already. Did I forget anything? Just let me know.