Thursday, November 06, 2008

L1v3 M35H L337 H4x0rZ

In case you aren’t already persuaded that the Live Mesh team are a bunch of L337 H4x0rZ, check out the IDs of entries in the Profiles feed.

id            title
G3N3RaL       GeneralProfile
480u7Y0U      AboutYouProfile
k0n74C7Inf0   ContactProfile
wORK1nfo      WorkProfile
1n7eRE572     InterestsProfile

I believe this also demonstrates their far-reaching commitment to open web standards and the new generation of social apps.  Or perhaps the hidden message is “so easy, even script kiddies can hack it!”

You can see this for yourself by firing up the Live Framework Resource Browser (LivefxResourceBrowser.exe from the SDK tools) and drilling down into Cloud LOE > Profiles.

On a slightly related note, as I was digging around with the Resource Browser I discovered that the following two URL styles appear to be interchangeable.

https://user-ctp.windows.net/V0.1/cid-1234567890123456789/Profiles

https://user-ctp.windows.net/V0.1/email-abc@live.com/Profiles

I’m not sure if the email identifier format is stable enough to bank on, but it’s convenient for typing or tweaking URLs by hand.  Does anyone know of other equivalent identifier types in Mesh resources?
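If you want to experiment with this yourself, here’s a trivial sketch of building both styles of Profiles URL for the same account.  The MeshUrls class and its methods are my own throwaway helpers, not anything from the SDK, and the cid and email values are placeholders.

using System;

static class MeshUrls
{
    private const string BaseUrl = "https://user-ctp.windows.net/V0.1";

    // Both identifier styles appear to resolve to the same Profiles feed.
    public static Uri ProfilesByCid(string cid)
    {
        return new Uri(String.Format("{0}/cid-{1}/Profiles", BaseUrl, cid));
    }

    public static Uri ProfilesByEmail(string email)
    {
        return new Uri(String.Format("{0}/email-{1}/Profiles", BaseUrl, email));
    }
}

Feeding either URI to your favorite HTTP client (with the appropriate Live ID credentials) should return the same feed.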

Wednesday, November 05, 2008

Live Mesh Resource Script Demo

In the Live Framework Programming Model Architecture and Insights session, Ori Amiga (standing in for Dharma Shukla, previously a WF architect) demos a Live Mesh resource script that runs in the cloud.  The script creates a folder on the Live Mesh desktop and downloads two images from external resources, placing them in the newly created folder.

I couldn’t find this sample on the web, so I recreated it from the session video.  You can download the demo project here.

You may need to touch up the references to Microsoft.LiveFX.Client.dll, Microsoft.LiveFX.ResourceModel.dll, and Microsoft.Web.dll since they live under C:\Program Files (x86)\ on my 64-bit box and are probably under C:\Program Files\ if you’re running 32-bit.
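If you’d rather not hand-edit, one option is to let the project file pick the right root itself with a conditional MSBuild property.  A sketch follows; LiveFxDir is my own property name, and the “Microsoft SDKs\Live Framework SDK” subfolder is a guess on my part, so substitute the actual install path from your machine.

<PropertyGroup>
  <!-- 64-bit machines: the 32-bit SDK lands under Program Files (x86) -->
  <LiveFxDir Condition="Exists('C:\Program Files (x86)')">C:\Program Files (x86)\Microsoft SDKs\Live Framework SDK</LiveFxDir>
  <!-- 32-bit machines fall back to plain Program Files -->
  <LiveFxDir Condition="'$(LiveFxDir)' == ''">C:\Program Files\Microsoft SDKs\Live Framework SDK</LiveFxDir>
</PropertyGroup>
<ItemGroup>
  <Reference Include="Microsoft.LiveFX.Client">
    <HintPath>$(LiveFxDir)\Microsoft.LiveFX.Client.dll</HintPath>
  </Reference>
</ItemGroup>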

At first, my demo threw an error trying to run the following line:

script.RunAtServer(creds);

After investigating with Reflector, I discovered that RunAtServer() hard-codes a default script URL of https://user.windows.net/V0.1/Script/, which needs to be changed to https://user-ctp.windows.net/V0.1/Script/.  You can override this either by calling an overload of RunAtServer() that takes a URI, or by creating an App.config file and adding the following line to the <appSettings> section.

<add key="ScriptUrl" value="https://user-ctp.windows.net/V0.1/Script/"/>
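For completeness, here’s the entire App.config I ended up with; nothing else needs to go in it.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- Point RunAtServer() at the CTP endpoint instead of the hard-coded default -->
    <add key="ScriptUrl" value="https://user-ctp.windows.net/V0.1/Script/"/>
  </appSettings>
</configuration>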

I chose to use the appSettings solution since that is what Ori must have used in the demo.

I really would prefer ScriptUrl to be exposed as a property on ResourceScript<> that appSettings/ScriptUrl maps into, rather than having to specify the URL either in config or in each method call.  My philosophy is that you should always be able to do in code what you can do in config.
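To make the wish concrete, here is the difference as I see it.  The first call is the overload route mentioned above (I’m assuming the URI comes after the credentials; check the real signature in Reflector).  The ScriptUrl property in the second version is purely hypothetical and does not exist in the CTP.

// Works today: pass the CTP script URL to the RunAtServer() overload.
// (Parameter order is my assumption; verify against the actual SDK.)
script.RunAtServer(creds, new Uri("https://user-ctp.windows.net/V0.1/Script/"));

// What I would rather write: a ScriptUrl property that appSettings/ScriptUrl
// maps into.  Hypothetical; this property does NOT exist in the CTP.
script.ScriptUrl = new Uri("https://user-ctp.windows.net/V0.1/Script/");
script.RunAtServer(creds);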

I’m looking forward to playing more with Live Mesh resource scripts, documented here.  Right now they feel a bit convoluted to create programmatically, but they appear to be designed so that a friendlier layer, such as an Oslo DSL or a “resource workflow designer,” can be built on top.

Tuesday, November 04, 2008

Dissecting Live Mesh App Packages

After bundling Flash inside a Mesh app, I took a closer look at what Visual Studio is doing behind the scenes.  The Mesh-enabled Web Application template creates a project with a .meshproj extension.  A .meshproj file has several important properties.  <OutputName> is the prefix used to name the resulting zip file.  <DebuggerCodeType> is set to either JavaScript or Silverlight, depending on whether you create an HTML/JavaScript or a Silverlight Mesh app.  <ApplicationUri> is the Application Self-Link that you are supposed to copy-and-paste from the Developer Portal after you upload the zip file, as instructed below:

[Screenshot: the Developer Portal prompting you to copy the Mesh application Self-Link]

A .meshproj file also imports $(MSBuildExtensionsPath)\Microsoft\Live Framework\v1.0\Microsoft.LiveFramework.targets which first ensures that your project has an index.html and then zips up the output directory, naming the zip file using the OutputName you specified.
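Putting those pieces together, a minimal .meshproj looks roughly like this.  The values are examples, and the real template Visual Studio generates includes a few more properties, but these are the interesting parts.

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Prefix used to name the output zip file -->
    <OutputName>MyMeshApp</OutputName>
    <!-- JavaScript or Silverlight, depending on the app flavor -->
    <DebuggerCodeType>JavaScript</DebuggerCodeType>
    <!-- Paste your Application Self-Link from the Developer Portal here -->
    <ApplicationUri></ApplicationUri>
  </PropertyGroup>
  <Import Project="$(MSBuildExtensionsPath)\Microsoft\Live Framework\v1.0\Microsoft.LiveFramework.targets" />
</Project>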

You don’t need to use Visual Studio to do this.  You can easily create your own Mesh app package by hand.  At a minimum, your zip file must contain:

  • index.html
  • Manifest.xml
  • Logo.png

The index.html file is the entry point for your app.  Logo.png is the icon that will be displayed on your desktop; it should be a 32-bit, 256 × 256 PNG.  Manifest.xml is your app manifest, and its configuration options are documented in detail here.
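If you’d rather script the packaging than click through Visual Studio, any zip tool will do.  Here’s a sketch using System.IO.Compression.ZipFile; that class only shipped in later versions of .NET, so on the 3.5-era framework you’d substitute a third-party zip library.

using System.IO.Compression;

class PackageMeshApp
{
    static void Main()
    {
        // AppFiles must contain index.html, Manifest.xml, and Logo.png at its root.
        ZipFile.CreateFromDirectory("AppFiles", "MyMeshApp.zip");
    }
}

Upload the resulting zip through the Developer Portal and you have a Mesh app.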

I believe you can bundle anything you want in the zip file, although Microsoft supposedly runs an antivirus scan on the contents, and there may be additional checks for inappropriate content.  Anything in the zip file gets downloaded to your computer when the app is installed on your local desktop.  This is why my Flash app was able to run offline.

You might be able to use this to “install” an XBAP application that can run offline.  To make it cross-platform, you could bundle the XBAP together with a “down-level” Silverlight version and choose which one to display based on what the client supports.  If download size is a concern, it might be possible to put the executables in a DataFeed instead of in the app zip file and selectively sync only the version you want to display, but I haven’t dug into DataFeeds enough yet to see if this kind of per-client sync filtering is possible.  Of course you would be working against the built-in versioning management if you did this (updates should only occur when the user closes and re-opens the app).

OK, so uploading a zip file sounds nice and simple, right?  Then why does Visual Studio want me to copy-and-paste the Application Self-Link URI?  It turns out that if you use Visual Studio, you only upload the zip file once per app.  Once you’ve uploaded the zip and told Visual Studio about the Self-Link URI, Visual Studio uses that URI for subsequent deployments, uploading the individual files directly.

If you watch Visual Studio with Fiddler (you’ll need to configure HTTPS support), you will see it query the Mesh for your app’s resource feeds, issue an HTTP DELETE for each resource that was inside your zip file, and then issue a series of POSTs to upload each item in your project, roughly the sequence sketched below.  That seems a bit risky.  What if Visual Studio dies before reposting all the resources it deleted?  Updating an app by manually uploading a zip file seems like a safer, slightly more atomic operation.  It’s no big deal right now, but once real production apps are being upgraded, something more robust would be nice.  I’m guessing we will see more explicit versioning, giving the user the choice of whether or not to upgrade.  If such a feature is added, the direct app resource update trick might be useful for bypassing an explicit upgrade prompt.
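Here’s that delete-then-repost sequence as a rough sketch in code.  The URLs are placeholders, the content type is a guess, and I’ve omitted authentication entirely since I haven’t worked out which headers Visual Studio sends.

using System;
using System.IO;
using System.Net;

class RedeploySketch
{
    static void Main()
    {
        // Hypothetical self-link of one resource inside the app's feed.
        string oldEntry = "https://user-ctp.windows.net/V0.1/.../index.html";

        // 1. DELETE the existing resource entry.
        var del = (HttpWebRequest)WebRequest.Create(oldEntry);
        del.Method = "DELETE";
        del.GetResponse().Close();

        // 2. POST the replacement back to the feed (content type guessed).
        var post = (HttpWebRequest)WebRequest.Create("https://user-ctp.windows.net/V0.1/.../Entries");
        post.Method = "POST";
        post.ContentType = "application/atom+xml";
        byte[] bytes = File.ReadAllBytes("index.html");
        post.ContentLength = bytes.Length;
        using (Stream body = post.GetRequestStream())
        {
            body.Write(bytes, 0, bytes.Length);
        }
        post.GetResponse().Close();
    }
}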

The next time your Live Mesh client (MOE.exe) talks to the cloud, it will download the new versions of the files into your local app cache (AppData\Local\Microsoft\Live Framework Client\Bin\Moe2\MR\).  For some reason I was unable to pinpoint the download traffic with Fiddler, so I can’t say for certain whether individual files are downloaded or if they are zipped up first.  It appears older versions of files aren’t removed.  This is probably to support the explicit user upgrade scenario in the future, but it seems like they could still be doing more cleanup.

I’m really curious why Visual Studio updates individual app resources rather than following the documented workflow of uploading a zip file with the updates.  Anyone know?

Update: I posed this question in the comments on Danny Thorpe’s blog and he responded:

On your second question, the reason we upload files individually instead of uploading the zip file is because the REST API we’re uploading to doesn’t handle zip files. The dev portal that you manually upload your zip file to unzips the file and uploads the individual bits to the production storage. The Live Services REST APIs that the VS tools use to upload files goes (as far as I know) straight into the production storage.

In a nutshell, the dev portal that you see in your web browser is just a front end to the actual cloud service. VS doesn’t upload to the dev portal UI, it uploads to the cloud itself.

Keep in mind that the long side trip of manual steps that you currently have to go through to get a new app created and uploaded to the cloud will all be going away as soon as the cloud APIs to create and provision a new application are implemented.

He also explains the debugging versioning scheme in the comments, and I suggest you go read it for more great details.

One other related insight from Danny comes from this forum thread:

Our goal for the VS tools is to do all development against the local LOE and let the local LOE deal with sync'ing things back to the cloud.  All the parts needed to do that aren't ready yet, so for the PDC CTP we redirected the VS tools to upload and debug mesh apps in the cloud.

This makes the current chattiness (and the “glue” dialog box) much more acceptable to me since the end goal is to use the local REST API rather than the cloud API.

Update 2: Danny has posted a thorough response to this post.  There’s lots of great information there, so I won’t quote it all here.  One “aha” moment for me was the concept of separate debug application resources.  He also confirms that the “glue” dialog will be going away soon.  Go read it for details.  Thanks, Danny!