Thursday, January 15, 2009

Exploring Live Framework Triggers

The Live Framework has the ability to add triggers to resources.  There is some documentation on triggers here and here (pgs. 14-15), but after reading it I was left with more questions than answers.  So I took a deep dive exploring the nooks and crannies of triggers and this blog post is the result.

Overview of triggers

Triggers are scripts that can be executed before and after resources are created, updated, and deleted.  The scripts are written using Resource Scripts (AKA MeshScripts), a tiny DSL for working with AtomPub and FeedSync in Live Mesh.  Think of it as the T-SQL of Live Mesh.  MeshScripts can be used as sprocs as well as triggers, but I’ll be focusing on triggers in this post.  See my previous posts for examples of sproc-style usage.

There are six triggers that can be attached to each resource:

  • PreCreateTrigger
  • PostCreateTrigger
  • PreUpdateTrigger
  • PostUpdateTrigger
  • PreDeleteTrigger
  • PostDeleteTrigger

The Create triggers run before and after each HTTP POST of a resource, the Update triggers run before and after each HTTP PUT of a resource, and the Delete triggers run before and after each HTTP DELETE of a resource.  This enables you to pack quite a bit of custom business logic inside a single call to the server.
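To make the verb-to-trigger mapping concrete, here’s a tiny Python model of the dispatch behavior described above (purely illustrative; none of these names come from the Live Framework):

```python
# Conceptual model of the trigger pipeline (not Live Framework code).
# Each HTTP verb fires only its own pre/post trigger pair.
TRIGGER_NAMES = {
    "POST":   ("PreCreateTrigger", "PostCreateTrigger"),
    "PUT":    ("PreUpdateTrigger", "PostUpdateTrigger"),
    "DELETE": ("PreDeleteTrigger", "PostDeleteTrigger"),
}

def run_script(script, resource):
    # a trigger script runs against the resource being processed
    if script is not None:
        script(resource)

def process_request(verb, resource, apply_operation):
    pre, post = TRIGGER_NAMES[verb]
    run_script(resource.get(pre), resource)    # e.g. PreCreateTrigger for POST
    result = apply_operation(resource)         # the actual create/update/delete
    run_script(resource.get(post), resource)   # e.g. PostCreateTrigger for POST
    return result
```

A POST thus runs pre-trigger, operation, post-trigger in order, and a resource with no triggers attached simply performs the bare operation.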

Trigger parameters

The resource that you’re creating, updating, or deleting is accessible from inside each trigger as a script parameter.  For Create and Update triggers, the parameter is the actual resource sent from the client to the server in the POST or PUT request.  For Delete triggers, the parameter is the server’s version of the resource being deleted since a resource isn’t sent from the client to the server for delete requests (the client simply specifies the URL of the resource to delete).

Three steps are necessary to use a script parameter:

  1. Define the parameter
  2. Bind to the parameter from one or more statements
  3. Add the parameter to the script’s root statement

Here’s roughly what this looks like using the syntax I created in my helper library, with the three steps marked in comments:

// (1) define the parameter
var param = S.ResourceParameter<MeshObjectResource>();

mo.Resource.Triggers.PostCreateTrigger = 
    S.CreateResource(newsItem)  // newsItem is a news entry built earlier in the sample
        // (2) bind to the parameter from one or more statements
        .Bind(s => s.CollectionUrl, 
            param, p => p.NewsFeedLink)
        .Bind(s => s.Request.Title, 
            param, p => p.Title)
        // (3) add the parameter to the script’s root statement
        .AddParameters(param);

The script snippet above adds a news entry to the news feed of the MeshObject you are creating (after it has been created, of course).  You can see this code in the context of a working sample in the download at the end of this post.  The sample also shows the equivalent “classic” syntax for the same trigger script.

Parameters are optional.  If you don’t need to access the original resource from your trigger script then you can safely omit all three steps and simply create a trigger script without any parameters.

There is only one actual resource parameter per script.  If you add more than one to the script, they are all treated as the same parameter.  This makes sense since all resource parameters are named “$Resource” under the hood.

There is another kind of script parameter called the ConstantParameter that lets you specify a name for the parameter, thus letting you have more than one of them per script, but I have been unable to get ConstantParameters to work so we’ll ignore them for now.  I’m guessing they are used for looping statements which aren’t available in the current CTP.

Create/Update triggers

Create and Update triggers share many similarities, so I will cover them together.

Create and Update triggers are a one-shot deal.  You must attach new Create or Update triggers each time you Add() or Update() the resource.  Only the triggers appropriate for the HTTP verb are used.  So for POST, the Create triggers are executed but the Update triggers are silently tossed, and for PUT, the Update triggers are executed and the Create triggers are tossed.  By “tossed” I mean they aren’t executed, and the trigger is set to null in the response you get back.

In case it’s not clear, Create and Update triggers are not persisted on the server.  They only exist for the duration of the HTTP request/response.

Unlike sproc-style MeshScripts, the trigger script’s Source property becomes null after the script has executed.  At first I thought this was a bug, but then I realized that this was necessary so that if you then proceeded to call Update() on the item it wouldn’t re-run the same trigger again.

Just like sproc-style scripts, Create and Update triggers return the results of script execution in the Result property of the trigger script which you can inspect for details.  Use them immediately or lose them because they won’t stick around for subsequent requests.
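Putting those rules together, the lifecycle of Create/Update triggers can be sketched as follows (a hedged Python model of the behavior described above, not Live Framework code):

```python
# Models the one-shot behavior of Create/Update triggers: the matching
# pair runs once, then ALL Create/Update triggers come back null in the
# response (whether they ran or were silently tossed).
def handle_write(verb, resource):
    matching = {"POST": ("PreCreateTrigger", "PostCreateTrigger"),
                "PUT":  ("PreUpdateTrigger", "PostUpdateTrigger")}[verb]
    results = {}
    for name in matching:
        script = resource.get(name)
        if script is not None:
            results[name] = script(resource)  # executed this request only
    # Source is cleared after execution, so nothing re-runs on a later Update()
    for name in ("PreCreateTrigger", "PostCreateTrigger",
                 "PreUpdateTrigger", "PostUpdateTrigger"):
        resource[name] = None
    return resource, results  # results only exist on this response
```

Note that Delete triggers are deliberately left untouched here; as described below, they are the one kind that persists.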

Original vs. updated values

The script parameter for Update triggers contains the updated resource being PUT by the client.  If you need access to the original value that will be replaced by the PUT, you can access it in the PreUpdateTrigger using the following code, replacing MeshObjectResource with the appropriate resource type:

originalValue = S.ReadResource<MeshObjectResource>()
    .Bind(s => s.EntryUrl, param, p => p.SelfLink)

You can then bind to originalValue in subsequent statements.  Note that “param” in the sample is the trigger script’s resource parameter.

Delete triggers

Only Delete triggers have a non-null Source property after a POST or a PUT.  This is because only Delete triggers are persisted along with the resource on the server.  Delete triggers can be added to a resource using either POST or PUT.  Since Delete triggers are round-tripped (the Source doesn’t become null in the response), you don’t need to remember to re-add them on subsequent updates, unlike Update triggers.  However, they are re-persisted each time you do an update.  This means that you can remove a Delete trigger by setting it to null and calling Update().

Delete triggers are executed when you perform an HTTP DELETE on the URL of a resource that already has a Delete trigger added to it by a previous operation.  Since no actual resource is posted or returned by the DELETE operation, there is no way to examine the script results or learn about errors.
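The persistence rules for Delete triggers can be sketched with a toy in-memory store (again a conceptual Python model, not Live Framework code):

```python
# Models Delete-trigger persistence: Delete triggers round-trip with the
# resource, are re-persisted on every update, and are removed by updating
# with the trigger set to None.
class Store:
    def __init__(self):
        self.resources = {}

    def put(self, url, resource):
        # the whole resource, including its Delete triggers, is re-persisted
        self.resources[url] = dict(resource)
        return dict(resource)  # Delete triggers round-trip in the response

    def delete(self, url):
        resource = self.resources.pop(url)
        for name in ("PreDeleteTrigger", "PostDeleteTrigger"):
            script = resource.get(name)
            if script is not None:
                script(resource)  # runs against the server's copy
```

Because an update re-persists the whole resource, updating with the trigger nulled out effectively deletes the trigger, exactly as described above.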

How triggers deal with errors

They don’t. :-)  To be more precise, errors are simply ignored.  They don’t cancel the POST/PUT/DELETE operation.  Similar to sproc-style scripts, no script Result is returned to the client if an error occurs.  Unlike sproc-style scripts, the error is not returned to the client.


While we’re on the subject of sproc-style scripts, it should be noted that sproc-style scripts are not transactional, and trigger-style scripts aren’t transactional either.  Sure, they may execute within the scope of a single HTTP request/response “transaction” but there is no rollback on failure.  Future releases are expected to include compensation/undo support.

Comparison to SQL triggers

Various databases support statement-level triggers and row-level triggers.  Statement-level triggers are executed once for a batch of rows resulting from a single statement, while row-level triggers are executed once for each row.  Both kinds of triggers are attached to tables in the database.

While Live Framework triggers can inspect data “per row,” the triggers are actually attached to each “row,” not to each “table.”  And as you already know, only Delete triggers actually remain attached to the “row.”

This means that it isn’t possible to put triggers on feeds (the equivalent of tables) that fire when entries are added, updated, or removed from the feed.

And as I explain in the next section, you can’t currently modify the incoming data before it is added or updated, unlike with SQL triggers.

Parameters are read-only (I think…)

At first I was under the impression that the incoming POST/PUT data exposed in the parameter to the PreCreate and PreUpdate triggers could be modified and the modified values would be passed along to the actual POST or PUT operation.  I made this assumption based on the following quote from page 15 of this document:

"The output of the PreCreateTrigger can be data-bound to the actual POST request entity and the data is propagated dynamically in the request pipeline. Similarly, the response entity of the POST operation can be data bound to the PostCreateTrigger. A similar binding can be done using the PreUpdateTrigger to the request entity of the PUT operation and the response of the PUT operation and the PostUpdateTrigger. Note that such a model to flow the data dynamically between the PostDeleteTrigger script and the response entity is not applicable to the DELETE operation since we do not return response entity in the DELETE operation."

This sounds promising, but unfortunately I have been unable to find a way to update the script parameter.

The problem is that I can’t find a way to bind to the resource parameter.  The resource parameter is exposed as a StatementParameter, not as a Statement.  All of the Bind() methods that take a StatementParameter have the parameter on the right-hand-side.  This means that you can assign from a resource parameter, but you can’t assign to it.

So I tried binding to “Parameters[0].Value” on the root statement of the script, but that didn’t work.  Then I tried binding to the parameter using its secret “$Resource” name, but that didn’t work either.

Perhaps someone forgot to add the appropriate Bind() overload, or perhaps there’s another way to get at the parameter that I’m not thinking of.  But until this is sorted out, parameters are read-only, at least on my box.

Once parameters can be modified, it will be interesting to see if you can completely replace the parameter (even set it to null?), or only update properties on it.  It will also be interesting to see if you can delete the resource in the PostCreate trigger and return a completely different resource to the client.  This could be a useful technique for creating singleton Mesh objects.

Triggers and the local LOE

Triggers don’t work at all if you’re connecting to the local client LOE.  If you add triggers to a resource and then Add() or Update() it, the resource comes back with all its triggers set to null.  This makes sense because the ability to execute scripts inside the client LOE is expected to be added in a later release.

But not even the Delete triggers are persisted and propagated up to the server.  It turns out that Delete triggers also don’t propagate from the server down to the client.  This made me nervous, wondering what will happen if I update a client-side resource that has a server-side Delete trigger.  Will the absence of a client-side trigger clobber the server-side trigger?  Thankfully the server properly merges the client-side update with the server-side resource’s Delete triggers.  Must be some FeedSync magic.

Then I tried deleting a resource on the client that had server-side Delete triggers.  The resource was successfully removed on the server, but the server-side triggers failed to execute!  So synchronization bypasses triggers.

Speculation regarding client script execution

Once client script execution is added in a future release, how is the situation likely to change?

Create/Update triggers will run on the client if you connect via ConnectLocal().

Assuming synchronization of Delete triggers is fixed, you will be able to add Delete triggers on either the client or the server.  If you delete the resource via Connect(), the trigger will run on the server.  If you delete via ConnectLocal(), the trigger will run on the client.

But what if you want a trigger to always run on the server?  Perhaps the trigger accesses external resources that you are unable to access while the client is offline.  Or perhaps the trigger accesses resources that aren’t synced to the client such as Contacts, Profiles, or MeshObjects that aren’t mapped to that particular device.  Perhaps there could be a client-side queue of pending triggers that are synchronized up to the server?

Creating triggers inside of scripts

Officially, you can’t add triggers to resources from inside of scripts.  If you try, you will get the following error message: “Trigger can not be associated with a resource which is being modified using meshscripts.”  Hey, look!  They said MeshScripts!  Personally, I think that’s a far better name than Live Framework Resource Scripts, as you can tell from the titles of my previous blog posts. :-)

Anyway, it is possible to add Delete triggers to resources from inside of a script.  The trick is that you must copy them from a pre-existing resource, like so:

    originalCollection = S.ReadResourceCollection<MeshObjectResource>(ScriptHelper.MeshObjectsUrl)
        .WithQuery<MeshObjectResource, MeshObject>(
            q => q.Where(o => o.Resource.Title.StartsWith("Original"))),
    // create a new MeshObject, copying the Delete triggers from the first match above
    S.CreateResource(new MeshObjectResource("I have delete triggers"))
        .Bind(s => s.Request.Triggers.PreDeleteTrigger, 
            originalCollection, c => c.Response.Entries[0].Triggers.PreDeleteTrigger)
        .Bind(s => s.Request.Triggers.PostDeleteTrigger, 
            originalCollection, c => c.Response.Entries[0].Triggers.PostDeleteTrigger)

Technically, you can use this technique to add Create and Update triggers too.  This can be verified by inspecting the script result and seeing that the resource was returned with Create and Update triggers containing the Source script that you specified.  However, these triggers don’t run.  Why not?

Scripts bypass trigger execution

Just as synchronization bypasses trigger execution, scripts also bypass trigger execution.  This is why our Create and Update triggers were added but didn’t run.

What happens if we use a script to delete a resource with Delete triggers on the server?  The script deletes the resource without running its triggers.

Consequences of bypassing triggers

If you choose to use Delete triggers, you must be careful to do all of your Delete operations through direct HTTP DELETE calls to the server.  Don’t use ConnectLocal(), and don’t use MeshScripts to delete resources.

This loophole could be useful in “oops” situations where you don’t want the triggers to run.

The bigger issue here is that you can’t reliably enforce server-side business logic.  I spoke with Abolade about this after his PDC session and he mentioned that perhaps the content screening hook points (used to block enclosures containing viruses and other inappropriate content) could be exposed to users for running custom business logic that is capable of rejecting content.  This could also be used to implement table-style triggers that are guaranteed to always run.  At first I thought this would be cool to have, but now I’m starting to think that such a server-centric feature isn’t an appropriate fit with the design philosophy of Mesh.  I may elaborate why in a future post.

Triggers on non-Mesh objects

Currently the root ServiceDocument exposes Profiles and Contacts in addition to Mesh.  I think these are known as Federated Storage Services, but I’m not sure.  Contacts map out of the Mesh to your actual Hotmail contacts.  Anyway, you access /Profiles and /Contacts using the same resource-based programming model as the rest of /Mesh.  Anything that is a Resource can have triggers, so what happens if we add triggers to a Contact?

I added a new Contact containing Create and Delete triggers.  The Create triggers worked, but the Delete triggers weren’t persisted and therefore didn’t run when I deleted the Contact.

I’m guessing there’s a service integration layer that translates back and forth between Mesh’s resource-based programming model and external services.  The Contacts service probably doesn’t have a place to store arbitrary data such as triggers, so they get lost in translation.  But the Create and Update triggers can still run because they don’t need to be persisted anywhere, so they can live entirely in the world of Mesh’s resource-oriented request/response pipeline that wraps the calls to the Contacts service.  Hmm, maybe there are benefits to not having to persist triggers…  But it would also be nice to have a consistent programming model for Create, Update, and Delete.

Summary of limitations

There are a number of limitations scattered throughout this blog post, so here’s a more concise list:

  • Create and Update triggers aren’t persisted
  • No row-level/statement-level triggers on feeds
  • Trigger parameters are read-only (I think)
  • Can’t add triggers from scripts
  • Synchronization bypasses triggers
  • Scripts bypass triggers
  • Delete triggers don’t work on non-Mesh objects
  • Local LOE doesn’t support triggers
  • Triggers can’t reliably enforce business logic


You can download the sample code here.  The samples use my Fluent MeshScripts library with a few minor updates.

While writing this I discovered and fixed a bug in my library’s expression-to-string code when it encounters expressions such as “c => c.Response.Entries[0].Triggers.PreDeleteTrigger”.   I also created an AddParameters overload that takes an SResourceParameter<TResource>.

The code includes examples of using all the trigger types, creating a resource with triggers from a script, bypassing delete triggers with a script, triggers on Contacts, and the “can’t add triggers from meshscripts” error.


Besides providing some detailed documentation and code samples for Live Framework triggers, hopefully this post has helped you think about scenarios where you might want to use them, as well as provided some pointers on when to avoid them or use them with care.  I also hope this can be used to improve the usability and functionality of this powerful feature of the Live Framework.

Update: it appears that triggers don't work on DataFeeds and DataEntries. See Raviraj's post in this forum thread for details.

Wednesday, January 07, 2009

MeshScript Ideas for the Future

In my last post where I added LiveItem syntax to MeshScripts, I said I had some more ideas for MeshScripts.  Some of these ideas are very small, some are very big, and some are in between.  The reason I’m listing them here is that there’s no way I could get to even a fraction of them (I’d like to move on and explore other areas of Live Mesh), so hopefully they spark your imagination.

Enhancements for existing MeshScripts

These enhancements could be applied to MeshScripts without taking a dependency on the library I wrote.

Chunking and chaining

There is an upper limit on the number of statements the server will process in a single script.  It would be nice to implement automatic script chunking and chaining based on a configurable statement batch size.  For cross-batch bindings, outputs from one batch can be fed into inputs for the next batch.  Special care needs to be taken at CompoundStatement boundaries, especially with ConditionalStatement.  This could also be used to implement a progress indicator for large jobs while still preserving most of the performance benefits of batching.
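A rough sketch of the chunking idea, over an abstract list of statements (the batch size and the run_script callback are placeholders, not Live Framework API):

```python
# Split a flat statement list into batches no larger than batch_size,
# running each batch as its own script and feeding outputs forward so
# later batches can bind to results produced by earlier ones.
def run_in_batches(statements, run_script, batch_size=20):
    results = []
    carried_outputs = {}  # outputs from earlier batches, keyed by statement
    for start in range(0, len(statements), batch_size):
        batch = statements[start:start + batch_size]
        # cross-batch bindings: run_script resolves references to earlier
        # statements as constant inputs taken from carried_outputs
        outputs = run_script(batch, carried_outputs)
        carried_outputs.update(outputs)
        results.append(outputs)
    return results
```

A progress callback could be invoked once per batch, which is where the progress-indicator idea comes from.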

Automatic parallelization option

I’m guessing most people are going to write their scripts using SequenceStatement as their CompoundStatement of choice.  It would be cool to have the option of automatically transforming and optimizing scripts by wrapping sections in InterleaveStatements where possible based on analysis of binding dependencies and URLs.  This ought to be a feature the user explicitly opts into.

Enhancements to Fluent MeshScripts library

These enhancements are specific to my Fluent MeshScripts library.

Use control flow MeshScript features

Note that ScriptContext’s record-replay model doesn’t need any “programmatic” script control flow features such as ConditionalStatement or the coming-soon LoopStatement.  Perhaps there’s an opportunity to add If/Else logic to the SLiveItem syntax.  This would require an expression tree visitor to touch up references to script statements since currently you must hard-code the statement ID/Name in the condition.

A switch statement could be added that uses multiple ConditionalStatements under the hood.

A similar sub-ScriptContext scoping solution could be used for generating InterleaveStatement sections.

Until we get a real LoopStatement, a fake Loop statement could be created that unwraps the loop a specified number of times.  This could be used for scenarios such as copying the last 10 items from a Twitter feed into a Mesh feed.
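Unrolling a fake loop of that sort is simple to sketch (a hypothetical helper, not part of any real library):

```python
# Expand a "loop body" into `times` copies, one per iteration index --
# a stand-in until a real LoopStatement exists.
def unroll(make_body, times):
    statements = []
    for i in range(times):
        statements.extend(make_body(i))  # body may reference the iteration index
    return statements
```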

Miscellaneous enhancements

Create a helper method that enables single-round-trip Live Folder creation that returns LiveItems.

I haven’t investigated expansions yet, but it seems like there’s an opportunity to take advantage of them in MeshScripts, preferably with helper methods to simplify common scenarios, whatever those might be.

Taking MeshScripts in bold new directions

Here are some wild and crazy ideas that probably won’t come to pass, but wouldn’t they be cool?

Add batching to LiveItems

It would be nice if LiveItems had the option of operating either in batched mode (like my SLiveItem implementation) or real-time mode (as they currently do).  Perhaps this could be enabled through a new Batched property on LiveItemAccessOptions.  This means LiveItems would be able to speak both AtomPub/FeedSync and MeshScripts.

Escaping the MEWA sandbox

This next one is more of a “see if it’s possible” item than a new thing to implement, but it could open up some interesting new scenarios that would be ripe for additional library development.  It would be interesting to see if MEWAs can use MeshScripts to escape the MEWA sandbox, either by calling them as sprocs or as triggers.  Of course if this is possible, it’s likely it will be quickly disabled, but who knows, perhaps it’s acceptable.  I tried running a simple ReadCollection script from a Silverlight MEWA and got an exception trying to deserialize the result (no public default constructor), so I haven’t pursued it further.

Yahoo Pipes for AtomPub

Yes, building Yahoo Pipes for AtomPub and FeedSync would involve much more than just MeshScripts, but I think MeshScripts could play an important role in its implementation, especially with the forthcoming visual script designer.

This idea first came to me as I was experimenting with pulling in external Atom feeds using MeshScripts (it’s also possible using LiveItems).  Most feeds had formatting that broke the script, but a few external Atom feeds magically worked.  I thought, wouldn’t it be nice if there were a MeshAtomTidy service that touched up external feeds with the appropriate data to ensure they load nicely into the Mesh?

That would be great for read-only feeds.  Wouldn’t it be even nicer if you could map LiveID credentials to external credentials and access other AtomPub APIs such as Google Calendar, Google Spreadsheets, Picasa, and more?  It would also be cool if you could write a little bit of glue code or script to wrap an AtomPub API around non-AtomPub APIs such as Twitter.  Or even better, just select from a list of pre-existing AtomPub wrappers for popular services.

Next, I’d like to enable automatic synchronization between Mesh feeds and external feeds.  It’s not very exciting to send tweets while you’re offline, but offline access to the Google APIs is more compelling.  There may be a need for additional transformations and business logic in between which is where the full-featured suite of Yahoo Pipes modules comes in handy.

Popfly integration

The closest Microsoft equivalent to Yahoo Pipes is Popfly.  It has a similar set of modules and a drag-and-drop design experience.  Perhaps there is an opportunity to integrate Popfly mashups with Mesh feeds.  It would also be cool if you could package Popfly games as MEWAs that can run on your desktop or on your phone and maybe even sell them through a Live Mesh App Store, but that’s probably enough crazy talk for one blog post. :-)

MeshScript Queries, LiveItems, and Magic

I’ve continued to extend my Fluent Resource Scripts library in several interesting new ways.  I’ve added:

  • Strongly typed LINQ queries
  • Turning script results into LiveItems (MeshObject, DataFeed, DataEntry, etc.)
  • LiveItem syntax for scripts

I’ve had to resort to more significant hacks to implement these, and I don’t feel that these features are as solid or as complete as the features in my original Fluent MeshScripts post.  But I think these ones are far more interesting, so I hope you will look past the rough edges and imagine the potential if these features were done properly.  The LiveItem syntax for scripts is especially cool, if I do say so myself.


Strongly typed LINQ queries

Both LiveQuery and ResourceQuery let you generate query strings from strongly-typed LINQ queries.  ResourceQuery is broken, which is unfortunate because its only generic parameter is of type Resource, which also happens to be the only generic parameter for most script statements.  This would have allowed us to implicitly pass along the statement’s generic parameter to our helper method without having to write any generic angle brackets.

So we have to use LiveQuery instead, which takes a generic parameter of type LiveItem (the non-generic LiveItem, not LiveItem<TResource>).  This makes it so that instead of calling my helper method like this:

    .WithQuery(
        q => q.Where(o => o.Resource.Title.StartsWith("my")))

You must instead call it like this, specifying both the Resource type and the LiveItem type:

    .WithQuery<MeshObjectResource, MeshObject>(
        q => q.Where(o => o.Resource.Title.StartsWith("my")))

Oh well, it’s still useful, and once ResourceQuery is fixed we can switch to the shorter syntax.

I should note that these queries are immediately turned into query strings under the hood.  In other words, they are evaluated at script design-time, not at script runtime.  I have put in a feature request to support query generation at runtime.

Turning script results into LiveItems

So you’ve written a resource script, you run it, and you get some results back.  Then you think, “I’d like to do further work with the results I’ve gotten back.”  You dig into the individual statements in the script’s Result property, cast them to the appropriate statement type, and then dig into the Resource property to examine the actual data that was returned.

But what if instead of working with results MeshScript-style, you wanted to work with them LiveItem-style, without having to do a bunch of tedious digging and casting?

DataFeed outFeed = null;
using (new ScriptContext(loe))
{
    ..., // create moStatement
    ...  // create the DataFeed statement
        .AtUrl(moStatement, mo => mo.Response.DataFeedsLink)
        .SaveResult(ref outFeed)
}
DataEntry de = new DataEntry("new entry");
outFeed.DataEntries.Add(ref de);

I’m not quite sure what programming idiom to compare SaveResult() to since the result isn’t actually saved until the script is run.  The closest thing I can think of is a future.

SaveResult() creates a new LiveItem of the appropriate type, and after the script is run, this LiveItem is filled in with all of the necessary response information to give you a “live” LiveItem that you can start partying on right away without requiring another server round-trip to flesh it out.  This required a fair bit of reflection magic because by default LiveItems are essentially “DeadItems” until they’re associated with a LOE and Reloaded.
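If it helps, the future-like behavior can be modeled in a few lines of Python (this is an analogy, not the actual SaveResult() implementation):

```python
# A placeholder handed out immediately, then populated when the batched
# script actually runs -- roughly how SaveResult() behaves.
class DeferredItem:
    def __init__(self):
        self._value = None
        self._filled = False

    def fill(self, value):
        self._value, self._filled = value, True

    @property
    def value(self):
        if not self._filled:
            raise RuntimeError("script has not run yet")
        return self._value

class Script:
    def __init__(self):
        self._pending = []  # (compute, deferred) pairs

    def save_result(self, compute):
        d = DeferredItem()
        self._pending.append((compute, d))
        return d

    def run(self):
        for compute, d in self._pending:  # one "round-trip" fills them all
            d.fill(compute())
```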

Notice I’ve added the use of a ScriptContext to enable the generated LiveItems to be automatically associated with an existing LiveOperatingEnvironment.

SaveResult() has been implemented for CreateResourceStatement, ReadResourceStatement, and ReadResourceCollectionStatement.  I have also added Statement extension methods ToMeshObject(), ToDataFeed(), and ToDataEntry() if you prefer to work that way.

When you combine ReadResourceCollection with SaveResult() and the query support above, you can now batch multiple LiveItem queries.

LiveItem syntax for MeshScripts

So we’re starting to get a decent bridge between the world of MeshScripts and the world of LiveItems.  But man, that MeshScript syntax is still pretty nasty, even with the fluent stuff I’ve added.  Wouldn’t it be nice if you could write a MeshScript the same way you write LiveItem code?

using (new ScriptContext(loe))
{
    var mo = new SMeshObject("original title");
    var feed = new SDataFeed("first feed");
    var feed2 = new SDataFeed("second feed");
    var feed3 = new SDataFeed("third feed");
    mo.Resource.Title = "script-generated title";
    mo.Resource.Type = "LiveMeshFolder";
    feed.Resource.Type = "LiveMeshFiles";
    feed2.Resource.Title = feed.Resource.Title;
    feed3.Resource = feed.Resource;
    mo.DataFeeds.Add(feed);
    mo.DataFeeds.Add(feed2);
    mo.DataFeeds.Add(feed3);
    var entry = new SDataEntry("my entry");
    feed.DataEntries.Add(entry);
}

The only syntactic differences in the code above are the S-prefixes on the various SLiveItems, the absence of ref parameters, and the absence of an explicit call to loe.Mesh.MeshObjects.Add() for the SMeshObject, although that extra syntactic hoop could easily be enabled.

Yes, that code results in just one round-trip to the server.  “You’re kidding,” you say?  “Where’s the man behind the curtain?”  Let’s see that again, annotated with comments.

using (new ScriptContext(loe))
{
    // CreateResourceStatements
    var mo = new SMeshObject("original title");
    var feed = new SDataFeed("first feed");
    var feed2 = new SDataFeed("second feed");
    var feed3 = new SDataFeed("third feed");
    // set properties at runtime using ExpressionBindings
    mo.Resource.Title = "script-generated title";
    mo.Resource.Type = "LiveMeshFolder";
    feed.Resource.Type = "LiveMeshFiles";
    // bind sFeed's Response.Title to sFeed2's Request.Title
    feed2.Resource.Title = feed.Resource.Title;
    // bind sFeed's Response to sFeed3's Request
    feed3.Resource = feed.Resource;
    // bind DataFeedsLinks to CollectionUrls
    mo.DataFeeds.Add(feed);
    mo.DataFeeds.Add(feed2);
    mo.DataFeeds.Add(feed3);
    // CreateResourceStatement
    var entry = new SDataEntry("my entry");
    // bind DataEntriesLink to CollectionUrl
    feed.DataEntries.Add(entry);
} // automatically run the script

In ORM terms, the ScriptContext now functions as a UnitOfWork that automatically saves all items created or modified within its scope.  You can think of it as a script recorder that replays what it has recorded when it’s done.  In addition to the LOE parameter, ScriptContext also takes an optional RunLocality parameter that determines whether the script is executed by the client or by the server.  It defaults to running at the server.
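That record-replay/unit-of-work behavior maps naturally onto a context manager; here’s a toy Python version of the pattern (not the real ScriptContext, whose API is much richer):

```python
# Records operations made inside a "with" block and replays them as one
# batch on exit -- the unit-of-work style that ScriptContext uses.
class RecordingContext:
    def __init__(self, runner):
        self._runner = runner      # callback that executes a batch
        self._recorded = []

    def record(self, op):
        self._recorded.append(op)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:              # only replay if the block succeeded
            self._runner(self._recorded)  # one batched "round-trip"
        return False
```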

The CreateResourceStatements were surprisingly straightforward to implement.  So were the Add() methods.

The property getters and setters required a bit more magic.  String-based property getters all return a magic string that specifies the binding.  String-based property setters generate a PropertyBinding if they are passed a magic string, otherwise they generate a constant ExpressionBinding with the string they are given.  SResource-based setters generate a PropertyBinding.  The magic string approach could also be used for Uri-based properties, although I didn’t implement those in this prototype.
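The magic-string trick can be illustrated with a small Python model (the property names and sentinel prefix are invented for illustration):

```python
# "Getters" return a sentinel string naming the source binding; "setters"
# detect the sentinel and record a PropertyBinding, otherwise a constant.
MAGIC = "\x00bind:"  # sentinel prefix marking a binding reference

class SItem:
    def __init__(self, name):
        self._name = name
        self.bindings = []   # (property, source) pairs
        self.constants = {}  # property -> literal value

    def get_ref(self, prop):
        # getter: a magic string naming this item's response property
        return f"{MAGIC}{self._name}.Response.{prop}"

    def set_prop(self, prop, value):
        # setter: magic string -> binding, otherwise a constant expression
        if isinstance(value, str) and value.startswith(MAGIC):
            self.bindings.append((prop, value[len(MAGIC):]))
        else:
            self.constants[prop] = value
```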

At first I generated script statements in the order that the SLiveItems were created, but later I added dependency-tracking so that property assignments and Add() calls re-order the statements if necessary.
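The dependency-driven reordering is essentially a topological sort; a generic sketch (the statement and dependency shapes are invented for illustration, and it assumes no cycles):

```python
# Order statements so every statement runs after the ones it depends on.
def order_statements(statements, deps):
    ordered, placed = [], set()
    def place(s):
        if s in placed:
            return
        for d in deps.get(s, ()):  # place dependencies first
            place(d)
        placed.add(s)
        ordered.append(s)
    for s in statements:
        place(s)
    return ordered
```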

Another advantage of late SLiveItem statement execution is that you won’t run into NullReferenceExceptions if you Add() SLiveItems to other SLiveItems in the “wrong” order, a problem that came up in a forum thread.  That thread was actually part of the inspiration for imitating LiveItem syntax.

In that same forum thread I also detailed how LiveItems lead a dual life as a request and as a response.  A similar duality exists with SLiveItems.  When assigning properties, the object on the left-hand-side sets properties on its request, while the object on the right-hand-side gets properties on its response.

Each SLiveItem has a Result property that returns a “live” LiveItem once the script has been run by explicitly calling ScriptContext.Current.CreateScript().Run().  This uses the SaveResult() technique described earlier.

Each SLiveItem exposes a WrappedStatement property that lets you make changes to the underlying Statement if you need that escape hatch into script-land to touch things like the request resource.

Using SLiveItem syntax with triggers

With these tools in our bag, we are now ready to make triggers more accessible.

First, I created a new type, SResourceParameter&lt;TResource&gt;, and a factory method, S.ResourceParameter&lt;TResource&gt;().  This enables a more strongly typed Bind() call than what I had in V1.

Then I updated the Bind() methods to tell the ScriptContext about any ParameterBindings so they can be automatically added to the auto-generated root script statement.

Finally, I decided to try out yet another syntax for Bind() that looks like Set().EqualTo().  It exists as a method on SLiveItem instead of as an extension method for Statements and I only implemented it for Resource ParameterBindings, but it could just as easily be applied to all of the other binding types.

This lets us write:

using (new ScriptContext(loe))
{
    var originalObject = new MeshObject("Original object");
    var triggerParam = S.ResourceParameter<MeshObjectResource>();
    var triggerCreatedObject = new SMeshObject("trigger-created object");
    triggerCreatedObject.Set(s => s.Request.Title).EqualTo(triggerParam, p => p.Title);
    originalObject.Resource.Triggers.PostCreateTrigger = ScriptContext.Current.CreateScript();
    loe.Mesh.MeshObjects.Add(ref originalObject);
}
The Set().EqualTo() syntax isn’t as nice as the LiveItem property getter/setter syntax, but it was quick to write, and I didn’t feel like taking more time to write the wrapper that would enable plain old property support.  I’m sure it could be done.

What remains to be done?

The LiveItem syntax turned out to be quite a bit more work than I was expecting, and it’s still nowhere near being done.  In fact it may have some major flaws that necessitate a rewrite; I’m not quite sure yet.  One major issue is that it currently only supports CreateResourceStatements.  Adding support for other statement types could have a huge ripple effect.  The use of magic strings is another design decision that may cause headaches down the road, but so far I’m getting away with it, and the alternatives aren’t nearly as nice to use.

I started out without any generics in the SLiveItem and SResource base classes but later introduced the rather insane explosion of generics to consolidate repetitive functionality from the derived classes.  It may be desirable to first undo the base class generics to create some mental breathing room and then add more functionality.

SLiveItem wrappers need to be created for Contact, Mapping, Member, Device, News, and Profile.  This may require creating additional wrapper classes for the helper classes they depend on such as NewsItemContext.

As mentioned in the previous section, Uri properties need magic string support, and ParameterBindings need LiveItem-style property assignment support.

Once the ResourceQuery bug is fixed, WithQuery should use it instead of LiveQuery.  If the runtime query feature is ever implemented, WithQuery should support that as well.

SLiveItem needs to implement Update() and CreateQuery().

The current implementation freely reorders statements.  It also assumes that in a property assignment, everything on the left-hand side is a request and everything on the right-hand side is a response.  If you set the same property more than once, only the last set sticks, and getting the same property at different points always returns the same value.  This may surprise users who depend on a property having different values at different points in the script.  It could be addressed by generating more than one statement per SLiveItem and/or by generating AssignStatements.

Per the comments in the code, my usage of ETags when generating LiveItems from script results may not be correct.  There are also a number of other TODOs in the code comments.

Last but not least, this needs unit tests.  More interfaces such as IScriptContext probably need to be created to enhance testability.

I’m sure there’s more I’m forgetting.  I told you, it’s a lot of work! :-)


You can download V2 of the Fluent MeshScripts library here.  I have enhanced the console app with additional examples that demonstrate most of the new features in this post.

To run the sample you will need to change the username and password.  If the project references to Microsoft.LiveFX.Client.dll, Microsoft.LiveFX.ResourceModel.dll, and Microsoft.Web.dll are broken, you will need to remove and recreate them in both projects.


I was going to discuss future directions for this library and for MeshScripts in general but this post has gone on for long enough so I’ll save that for my next post.

Hopefully my enhancements help you visualize more possibilities for MeshScripts.  At the very least, they should make MeshScripts much easier for you to write and work with.

As always, I’d love to hear any feedback.