Onepiecejigsaw is a small, flexible and dynamic company specialising in web development.


Object.keys

added by Andy

06-12-2016

Working with JavaScript a lot, it's often necessary to traverse an object or array, looking for something in particular. Sometimes the object or array you're working with has been handed to you by a third-party system, so you have no control over which of the two you get. An array is typically easier to work with in some ways, especially if it's an array of objects.

Let me show you some examples:


var array = [
    {
        price: 4.8,
        name: 'My first book'
    },
    {
        price: 12.6,
        name: 'My second book'
    }
];

This is an array of objects, so it's dead easy to pick one out by its index. For example, to grab the first object of the array:


var firstBook = array[0];

However, if we're talking about an object which contains sub-objects, like this:


var obj = {
    book1: {
        price: 4.8,
        name: 'My first book'
    },
    book2: {
        price: 12.6,
        name: 'My second book'
    }
};

This is a little more difficult. Enter Object.keys! This neat little tool returns an array of all the first-level key names of an object. Therefore, to grab the first sub-object, you just need to do this:


var firstBook = obj[Object.keys(obj)[0]];

This grabs the key of the first sub-object (Object.keys(obj)[0]) and passes it to obj[] in order to pull the right sub-object out. Job done!
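Object.keys also makes it easy to loop over the sub-objects when you don't know their key names up front. A quick sketch, reusing the obj from above (the price check is just for illustration):


// Visit every first-level key and inspect the sub-object it points to
Object.keys(obj).forEach(function (key) {
    if (obj[key].price > 10) {
        console.log(key + ' costs ' + obj[key].price); // logs 'book2 costs 12.6'
    }
});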

XML Serialisation

added by Andy

30-09-2014

Part of my final end-to-end MVC framework relies on a simple, effective feature of .NET that I think deserves its own special mention. There are built-in functions that allow a .NET object to be serialised into XML, and vice versa.


Application / Database interface
Using XML serialisation is the best way I've found to pass complex data structures between .NET and SQL Server. It allows multi-level .NET objects to be serialised into a solid XML structure, and passed into SQL Server where the data can be read by using XQuery. Similarly in reverse - by using the FOR XML PATH enhancements to a normal SQL query, you can ask SQL Server to spit out XML (even just as a string) and then pick this up in .NET for deserialisation straight into a .NET object.

Serialisation
Imagine this .NET Class:



public class User
{
    public int ID { get; set; }
    public string Password { get; set; }
    public List<Email> Emails { get; set; }

    public User()
    {
        Emails = new List<Email>();
    }
}

public class Email
{
    public int ID { get; set; }
    public string Context { get; set; }
    public string EmailAddress { get; set; }
    public bool Primary { get; set; }

    // XmlSerializer needs a parameterless constructor for deserialisation
    public Email() { }

    public Email(string context, string emailAddress)
    {
        Context = context;
        EmailAddress = emailAddress;
    }
}


We have the main class (User), with properties. One of these properties is a List of objects based on a sub-class (Email). This allows us to nest information and store any number of child Email objects within one User object. Let's define an object:

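Something like this will do (a sketch; the IDs, password and addresses are purely illustrative values):


var user = new User { ID = 1, Password = "s3cret" };
user.Emails.Add(new Email("Home", "andy@home.example") { ID = 1, Primary = true });
user.Emails.Add(new Email("Work", "andy@work.example") { ID = 2, Primary = false });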

... and then serialise this into XML:

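A minimal sketch using XmlSerializer (from System.Xml.Serialization) and a StringWriter (System.IO):


var serializer = new XmlSerializer(typeof(User));
string xml;
using (var writer = new StringWriter())
{
    serializer.Serialize(writer, user);
    xml = writer.ToString();
}


For the object above, this produces XML along these lines (XML declaration and namespace attributes trimmed for readability):


<User>
  <ID>1</ID>
  <Password>s3cret</Password>
  <Emails>
    <Email>
      <ID>1</ID>
      <Context>Home</Context>
      <EmailAddress>andy@home.example</EmailAddress>
      <Primary>true</Primary>
    </Email>
    <Email>
      <ID>2</ID>
      <Context>Work</Context>
      <EmailAddress>andy@work.example</EmailAddress>
      <Primary>false</Primary>
    </Email>
  </Emails>
</User>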

Notice that the XML tag names have been set automatically here. The root tag takes the name of the class (<User>). The 'Emails' property lends its name to its wrapper tag, and the child class name is used for each item within it (<Emails> and <Email> respectively).

Easy as that! This XML can then be passed into SQL Server by using the SqlXml object:

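A hedged sketch (the stored procedure and parameter names are my own illustrations):


// SqlXml lives in System.Data.SqlTypes
var sqlXml = new SqlXml(XmlReader.Create(new StringReader(xml)));

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.SaveUser", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    // Passed as an xml-typed parameter, ready for XQuery on the other side
    cmd.Parameters.Add("@UserXml", SqlDbType.Xml).Value = sqlXml;
    conn.Open();
    cmd.ExecuteNonQuery();
}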

Once in SQL Server, there are many things we can do with this data. XQuery is agile, but it's not incredibly efficient. This is best used for relatively small chunks of data. You wouldn't want to use this method to pass in information on a 200,000-strong userbase! But a handful is perfect. A few examples of XQuery that are useful here are:

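These are sketches, assuming the stored procedure receives the <User> XML above as @UserXml:


-- Pull single values out of the XML with .value()
SELECT @UserXml.value('(/User/ID)[1]', 'int') AS UserID,
       @UserXml.value('(/User/Password)[1]', 'nvarchar(128)') AS Password;

-- Shred the repeating <Email> elements into rows with .nodes()
SELECT e.n.value('(Context)[1]', 'nvarchar(50)') AS Context,
       e.n.value('(EmailAddress)[1]', 'nvarchar(256)') AS EmailAddress,
       e.n.value('(Primary)[1]', 'bit') AS [Primary]
FROM @UserXml.nodes('/User/Emails/Email') AS e(n);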



Deserialisation
Perhaps more surprising is how easy it is to do this in reverse. Say we wanted to pull some user information out of SQL Server, and populate a User object as above. We can write a query like this:

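A sketch, assuming Users and Emails tables whose columns mirror the class properties (the table and column names are illustrative):


SELECT u.ID,
       u.Password,
       (SELECT e.ID,
               e.Context,
               e.EmailAddress,
               e.[Primary]
        FROM Emails e
        WHERE e.UserID = u.ID
        FOR XML PATH ('Email'), ROOT ('Emails'), TYPE)
FROM Users u
WHERE u.ID = @UserID
FOR XML PATH ('User');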

Because we know what tag names .NET is expecting (i.e. the same ones it used when it serialised the data above), we know how to build the XML in SQL Server so .NET can deal with it. To do this, we use the FOR XML PATH ('xxx'), ROOT ('yyy') construct, and we also make use of nested queries to form the nested structure of the XML. As soon as we have a query that outputs XML in the same format as .NET created above, we know we're onto a winner.

So we can deserialise this straight into a User object, and it works fine.

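A minimal sketch, assuming the query's XML output has been read into a string named xmlFromSql:


var serializer = new XmlSerializer(typeof(User));
using (var reader = new StringReader(xmlFromSql))
{
    var user = (User)serializer.Deserialize(reader);
}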

The one thing this doesn't do is validate the properties. If it fails to map a piece of XML data onto a property, it just misses it out. This means that validation of individual pieces of data must be handled by the application or the database, at one end or the other.



XML tag names
You saw above what the default tag names will be for any specific object being serialised into XML. One point to note is what happens when you work with an object which doesn't fit in with the convention above.

Above, we had:
User - a class
Emails - a named property of the User class
Email - a sub-class

In this instance, Emails is a List<Email>. But what happens if we define a standalone List<xxx> in code, and then serialise that into XML? The List variable will have a name, of course, but if it's not a pre-named property of a class, what happens to the XML tags?

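Take this, for example (reusing the User class from above; the values are illustrative):


var users = new List<User>
{
    new User { ID = 1, Password = "s3cret" },
    new User { ID = 2, Password = "hunter2" }
};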

If we serialise this, we get:

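Something along these lines - the serialiser generates a root tag of ArrayOf plus the class name:


<ArrayOfUser>
  <User>
    <ID>1</ID>
    <Password>s3cret</Password>
    <Emails />
  </User>
  <User>
    <ID>2</ID>
    <Password>hunter2</Password>
    <Emails />
  </User>
</ArrayOfUser>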

This is important to remember, as quite often we might want to do this in reverse. If we want to output multiple users' information from SQL, we must remember to put FOR XML PATH ('User'), ROOT ('ArrayOfUser') at the end of the query.

LockBox: The best application I will never build

added by Andy

26-08-2014

I've been spending a lot of time recently thinking about my next big project. I've been expanding my set of skills on a variety of other people's ventures, but I'd really like to start building something that I can call mine. I mentioned in my last couple of posts that I've been working with Angular.js - that's for a project, the brainchild of a friend of mine, which I've come into very early on, and in fact the development work is being done solely by me. So I guess that's a start... but still... the mind wanders.

Every project I think of seems to hit the same initial stumbling blocks. Assuming it's a good idea, the biggest consideration nowadays seems to be security - most specifically, security of information. It started to dawn on me that the usefulness of applications these days is no longer limited by technology. It's limited by security, trust and freedom of access to information. Imagine what kind of applications we could build if security were no longer an issue, and (most) people were happy to sign up to a system that had restricted access to their location, their card details and their home address.

My dream is thus (with many caveats, explained):

LockBox
- LockBox is an application that runs on a Virtual Private Server (VPS). The build is a hardened Linux web and database server. The VPS is owned by the LockBox user, and only their data is stored on the VPS.
- A LockBox server can be specced up to 320GB of disk space, and can be used as a Dropbox-replacement file storage system. The files are not stored anywhere else in the 'cloud', meaning they only ever reside on hardware owned by the user.
- The LockBox database can store card details, personal information about the user, photos, etc. The great thing about the information being stored here and only here is that the user only needs to change it in one place should it change. Move house, and only change one address.
- The real crux of this solution is this: Third party websites and applications use a secure RESTful API to your LockBox to grab your up-to-date information. CAVEAT: I don't know exactly what technology they would/could use to make these calls. Could it be desirable for them? Could we make sure they didn't store the info once they grabbed it?
- The LockBox user can control all access using a smartphone app. Has a third party abused their access privileges? Cut them off at the click of a button. CAVEAT: See above, about avoiding them storing the info!
- Login to participating sites would also be facilitated through LockBox. This would entail a two-factor login process, relying on a smartphone app to provide a secure key. CAVEAT: The sites would have to sign up to this too, although centralised login is widely used nowadays, e.g. via Facebook.

The Problems
Among all the unknowns here is one glaring issue that I can see. In my mind, the eventual product is fairly well defined, and could add real value to both consumers and online business owners. The problem is how to get to that point. The idea relies on a certain-sized userbase before it's worth businesses signing up to the pain that integration with LockBox would bring. And getting that userbase interested would require LockBox to offer some serious advantages before these other integration features were fully fledged. Maybe the Dropbox-replacement feature might fly, but the man who goes directly up against Dropbox is a brave one.

For now, I think LockBox might have to remain a dream. I don't know - maybe it's a rubbish idea after all. Either way, if you read this and fancy having a pop at creating it, be my guest. And good luck!

End-to-end MVC Framework - Part 2: The Problem with TVPs

added by Andy

16-06-2014

In Part 1, I constructed (what I think is) a really good end-to-end model structure, from client through API to database. I used JS objects on the client side, .NET objects within the API layer and web application, and Table-Valued Parameters (TVP) in SQL Server.

I was really excited about the consistency this gave me, and was looking forward to working with the benefits offered by such a rigid structure.

I also had a feeling that I might have missed something important. I had.


The Problem

Using TVPs to pass .NET Objects straight into SQL Server, and effectively pull a .NET Object straight out again, is all fine unless your .NET Objects have any kind of hierarchical structure. TVPs are flat, just like a single table. You can't nest them, and more importantly you can't nest the SqlDataRecord object that underpins the two-way conversation mentioned above.

I have a User object, and this User can have multiple Emails. Each Email can have a few different properties. I facilitate this structure in .NET by giving the User class an Emails property, which is a List<Email>, where Email is another class.

Email isn't the only property of User that needs to work in this way. You can imagine that very quickly we're relying on a User object that won't fit nicely into any flat table structure.


The Solution

Enter XML. I feel like I've gone round the houses a bit to end up here, but I always see the value in making these journeys of discovery for oneself! A wise man once told me that SQL Server was basically born to spit out XML.

OK, maybe he didn't put it that strongly, but there's certainly a really easy way to have a Stored Procedure respond to a query with well-formed XML. It allows you to nest queries to create multi-layer XML, and to choose the tag names for each level. At the lowest level, the tags assume the names of the fields they represent.

Luckily, .NET also contains some nice XmlSerializer functions which allow our .NET Object to be converted to and from XML! Once we know exactly how the field names should be set up, we can write a set of functions to take care of this for us.
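As a taster, here's a minimal sketch of the sort of generic helpers I mean (using XmlSerializer with its default conventions; error handling omitted):


public static class XmlHelper
{
    // Serialise any object into an XML string using the default tag names
    public static string Serialise<T>(T obj)
    {
        var serializer = new XmlSerializer(typeof(T));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, obj);
            return writer.ToString();
        }
    }

    // Deserialise an XML string (e.g. straight from SQL Server) back into an object
    public static T Deserialise<T>(string xml)
    {
        var serializer = new XmlSerializer(typeof(T));
        using (var reader = new StringReader(xml))
        {
            return (T)serializer.Deserialize(reader);
        }
    }
}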

At the moment, I have a SP that accepts XML for a User object as described above. I use XQuery in the SP to pick it apart, do what I need to, and then return XML which gets converted into a .NET Object. This object actually comprises three child objects (User, KeywordList and TopMenuItemList). There's no way I could have done this with TVPs unless I made multiple calls to the database. This is all done in one fell swoop!

I'll share some of the details of all this in my next post. For now, I'm going to build this out a bit and see where we end up.

End-to-end MVC Framework - Part 1: The Concept

added by Andy

28-05-2014

I've been working a bit with Angular.js recently. It's a JavaScript framework that fits the MVC model (or, more accurately, MVx - the definitions of this family of architectures have become blurred with the introduction of so many similar frameworks).

So far, it's working very well for what I want. I'm building a true web application - it's something for people to use as if it's an application on a mobile device. This means it doesn't need to be searchable, it doesn't hold a lot of content that's available to the masses, and it involves a lot of user-specific chat between the browser and the server.

The main downside I've seen to using client-side JS frameworks such as Angular.js is that the resultant website doesn't play very well with Googlebots. (This article explains this very well). For my application, this isn't a problem. There may be other downsides that come to light further down the line, but for now I think the architecture I've plumped for will work well. It looks like this:



You can see above the different elements at each different level of the application. I've read a few posts and articles that tackle the question of "where should my M, V & C exist?". In the example above, I'm using a JS framework that contains all three elements, as well as a .NET template that claims to be an MVC framework (Web API, or MVC4). While laying down my architecture, it appeared to me that it would be a great idea to have all three levels (Client, Server and DB) aware of the same data structure. In other words, a Model that is pretty much identical from one end to the other.


The Concept

The relationship between Client, Server and DB is all about passing data around. The Web API framework for the Server application is designed to talk natively to JavaScript. It can accept JSON in a POST request, and convert it directly into a .NET object if one exists in the correct format. Also, its default response type is JSON, and if you pass a .NET object straight out to the Client, it will be formatted as JSON.

This led me to investigate whether the same relationship could be forged between Server and DB. I did some reading, and discovered Table-Valued Parameters (TVPs). These are a specific kind of User-Defined Type in SQL Server. Essentially, a TVP is a table structure, much like a normal table definition. It is therefore quite similar to a .NET class, or more specifically a list of .NET classes. If you imagine a simple .NET object represented as a row in a table, then a populated TVP could represent a list of such objects.
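For instance, a TVP type for a flat User row might be declared like this (a sketch; the names are illustrative):


CREATE TYPE dbo.UserTableType AS TABLE
(
    ID       int,
    Password nvarchar(128)
);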

I was now getting close to an end-to-end solution, which involved the ability to pass information in a standard structure from one end to the other. In other words, this:



I've split the .NET level into two - the Web API which deals with responses and requests from and to the Client, and the back-end Application.

Each of the annotated data transfers deserves a little explanation:

A shows a typical POST request, passing a JS Object (as JSON) directly to the Web API URL. As it is received, we simply tell the API Controller that it's being given a .NET Object, and it populates a new one with the data. No Newtonsoft in sight!
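A minimal sketch of what this looks like on the Web API side (ApiController comes from System.Web.Http; the route and class names are illustrative):


public class UsersController : ApiController
{
    // POST api/users
    // Web API binds the JSON request body straight onto the User parameter (A),
    // and serialises whatever we return back out as JSON (D)
    public User Post(User user)
    {
        // 'user' arrives here as a fully populated .NET object
        return user;
    }
}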

B is a bit more tricky. In order to pass a .NET Object straight into a pre-defined TVP, we have to pass it as a SQL Parameter. The type "SqlDbType.Structured" is the one we're after, but that only accepts data in a select number of formats. One is DataTable (which I'm trying to avoid... I think this defeats the object somewhat) and one is IEnumerable<SqlDataRecord>. The short answer to this problem is that I can create a List of .NET Objects and implement IEnumerable<SqlDataRecord>. I populate this as the .GetEnumerator method is run (by the SQL client, while it streams the parameter), thereby creating a populated object that can be passed straight into SQL Server. (The long version, where I got this idea from, is here)
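A sketch of the idea, assuming a flat dbo.UserTableType TVP as above (the metadata must match the TVP's columns exactly; SqlDataRecord and SqlMetaData live in Microsoft.SqlServer.Server):


public class UserCollection : List<User>, IEnumerable<SqlDataRecord>
{
    IEnumerator<SqlDataRecord> IEnumerable<SqlDataRecord>.GetEnumerator()
    {
        var record = new SqlDataRecord(
            new SqlMetaData("ID", SqlDbType.Int),
            new SqlMetaData("Password", SqlDbType.NVarChar, 128));

        // Populated lazily as the SQL client streams the parameter
        foreach (var user in this)
        {
            record.SetInt32(0, user.ID);
            record.SetString(1, user.Password);
            yield return record;
        }
    }
}


It can then be handed over as a structured parameter:


var param = new SqlParameter("@Users", SqlDbType.Structured)
{
    TypeName = "dbo.UserTableType",
    Value = userCollection
};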

C is essentially the same as B, in reverse! I pulled it out separately here as it's actually possible to pull multiple result sets out of the Stored Procedure, and use each one to populate a separate object. More details on this later.

D is another great feature of Web API - you can pass fairly complex .NET Objects straight out to the browser, and what hits the Client is well-structured JSON. A dream come true for a Client-side Controller!


The Proof

I got all this working for a simple login function:



(In fact, this is very slightly different to the example in Diagram 2 - I don't use a POST for this functionality. As this is a RESTful API, I've stuck to the proper verb, which is GET. I pass the Email and Password directly in the URL).

It works exactly how I had hoped it would - I can pass data in an HTTP request to the Web API, which remains in .NET Object form all the way to SQL Server. It gets passed to SQL Server as a SqlDbType.Structured parameter, where it is received as a populated TVP. SQL Server performs its actions, and returns multiple result sets to .NET. These result sets each populate a sub class of a new .NET Object, which is passed all the way through to the Web API as a single object. This appears in the HTTP response as JSON, which can be dealt with by JavaScript in whatever way necessary. In my case, I split it up into its component parts and store them in the Client's Model, for use in the application.


The Architecture

This relies on a fairly strict architecture before it works well. The JavaScript end doesn't require much pre-defined structure as it's pretty flexible, but both .NET and SQL Server need classes and TVPs (respectively) to be pre-defined.

In the example above, the multiple result sets that are returned from SQL Server to .NET need to know what type of objects they are populating. I have opted to pre-define a class that includes sub classes for User and Keywords. This means that the functions I've written at the Data Access Layer (DAL) can be pretty standardised, and will be used for multiple functions within the application. In other words, I tell the function what object needs to be returned, and it uses Reflection to figure out what the sub classes are, create the relevant objects, and populate from the SQL Server response.
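A rough sketch of the shape of that DAL function (heavily simplified; it assumes each result set maps, in order, onto one property of the wrapper class, with one row per result set and columns named after the properties):


public static T PopulateFromResults<T>(SqlDataReader reader) where T : new()
{
    var result = new T();
    foreach (var prop in typeof(T).GetProperties())
    {
        // Create the sub-object (e.g. User, Keywords) for this result set via Reflection
        var item = Activator.CreateInstance(prop.PropertyType);
        if (reader.Read())
        {
            for (int i = 0; i < reader.FieldCount; i++)
            {
                // Match columns to properties by name; unmatched columns are skipped
                var subProp = prop.PropertyType.GetProperty(reader.GetName(i));
                if (subProp != null && !reader.IsDBNull(i))
                {
                    subProp.SetValue(item, reader.GetValue(i), null);
                }
            }
        }
        prop.SetValue(result, item, null);
        reader.NextResult(); // move on to the result set for the next property
    }
    return result;
}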

The thing I really like about this is that I haven't had to use manual serialisation at any point, nor have I had to create DataTables in order to hold and transfer data. At all levels of the framework, the Model is stored in a native structure, and it feels like passing the data around in this way will lend itself to robust and relatively bug-free operation.

Whether this has come with unintended consequences, such as poor performance or code that is near impossible to work with, remains to be seen... The next part of this article will be written once I've explored how this framework behaves under duress, in a real life application. Wish me luck...