
So, I've been working on an MVC cloud app for a while now. Since I have worked mostly with forms-based authentication in the past, and decided to use my own membership provider (rather than the out-of-the-box provider), I continually had issues obtaining user authentication and retrieving user information to confirm proper authentication at the controller level.

As far as creating the cookie for authentication goes, this isn't an issue. First, on login, we authenticate the user based upon the submitted values, and then we can set the cookie.

Code Snippet
  FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
      1,
      userString,
      DateTime.UtcNow,
      DateTime.UtcNow.AddMinutes(5),
      createPersistentCookie,
      FormsAuthentication.FormsCookiePath);
  string encTicket = FormsAuthentication.Encrypt(ticket);
  Response.Cookies.Add(new HttpCookie(FormsAuthentication.FormsCookieName, encTicket));

Comments on the above: 1 is the FormsAuthenticationTicket version, userString is a variable containing the information you want to store in the ticket's name field, the next two dates are the issue date and the expiration date, the createPersistentCookie parameter is a boolean that determines whether the cookie persists past the session, and the last argument supplies the forms-authentication cookie path.
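
On a later request, the same information can be read back out of the cookie. Here is a sketch of the reverse step; the placement (for example a controller action or a Global.asax event handler) and the null/expiry checks are my assumptions, not part of the original post:

```csharp
// Sketch: reading the forms-authentication ticket back on a later request.
HttpCookie authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
if (authCookie != null)
{
    // Decrypt reverses the Encrypt call used when the cookie was created.
    FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(authCookie.Value);
    if (ticket != null && !ticket.Expired)
    {
        // ticket.Name holds the userString stored at login.
        string userString = ticket.Name;
    }
}
```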

There's a lot of debate about where, specifically, user-level determination, authorization, and delivery should be handled, and how it should be tackled. I prefer to have my controller handle this, so knowing who the user is matters at that level. The Identity is not populated until after the controller has been initialized, which makes it tough to obtain this information without some hacking. Fortunately, I found a couple of solutions.

You can create a controller action that obtains this information and then passes it on to your other controllers as needed. Another option is to write a new base controller that inherits from the controller type and adds a username to it; whatever controller needs this information can then read it as a shared string (or whatever data type you intend your identifier to be!). It is important to note that some types cannot be null, so how you handle that situation in the else block is up to you, depending on the type.

Here’s the code:

Code Snippet
  public class UserAwareController : Controller
  {
      public String CurrentUser;

      protected override void Initialize(System.Web.Routing.RequestContext requestContext)
      {
          base.Initialize(requestContext);

          if (requestContext.HttpContext.User.Identity.IsAuthenticated)
          {
              CurrentUser = requestContext.HttpContext.User.Identity.Name;
          }
          else
          {
              CurrentUser = null;
          }
      }
  }


Now, to obtain the user information you desire (which can be changed in the Initialize method), you just need to reference the variable within the controller, just like a typical string.


Code Snippet
  public JsonResult _getUser()
  {
      return Json(CurrentUser);
  }


There are a lot of reasons to want this information on the controller, and this modification allows you to do it on the MVC side of things.

There isn't a particular source I used for this; I looked at a LOT of posts over the last couple of days to determine which way was best for me and my needs on this application.


Markup validation is part of developing websites, but there comes a point when you put off validation in favor of the experience. It's a very common dilemma, which side to take, but I always try to look for alternative ways of solving issues while still validating properly. The main question here, though: what standard do you currently validate against, and when does that standard change? There are many standards to write against, but the most common are XHTML 1.0 and HTML 4.01. Each standard comes with different changes and clarifies new 'no-no's.

I think it's important to stay consistent, as well as to move forward when a standard becomes more prevalent in browsers. I do like the idea of Modernizr taking some of this on for us, but ultimately we are responsible for making our code work well in the highest percentage of browsers possible.

So what do you do? How do you decide when to move to a new validation standard? Do you prefer in-development validation, or live validation via an external site? What validation rules are you writing your HTML against?

When working with WCF services, as well as Azure storage, you sometimes need to install support packages for certain dependencies, as well as SDKs. These can sometimes make configuration alterations that cause issues with services you already have configured. Such was the case with a set of WCF services that I had written and began debugging. The error stated:



Ultimately, this issue is more of an annoyance for testing than anything else. The important thing is, it is easy to fix! The version of .NET you are writing against determines where this needs to be fixed. In my case, it was:


You can search the file for the <client> block. Once found, just comment out the entire block, so that if you need it in the future you can uncomment it. After saving the file, go ahead and retest your WCF service, and you should no longer get the error!
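
As a sketch, the commented-out block would look something like the following; the endpoint contents here are hypothetical placeholders, not the actual entries from my configuration file:

```xml
<!-- Commented out rather than deleted, so it can be restored later if needed.
<client>
  <endpoint address="http://localhost/Service1.svc"
            binding="basicHttpBinding"
            contract="IService1" />
</client>
-->
```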

Within WCF services, there are many different return types you can use to get data from the different dependencies within a service. One problem you might run into when working with WCF, though, is that WCF doesn't support using a Dictionary as a return type. There are many reasons for this (which I will not go into now), but there are also different ways of dealing with it. All the examples I found online seemed like reasonable options, but I also had a different idea.

Since WCF allows a List return type, why not convert the dictionary to a list, return it, and convert it back into a dictionary? Having two methods, one that converts a dictionary to a list and another that converts back, makes this very easy to do.

Below are two methods for doing just this!

Code Snippet
  private static string separationString = "!|!";

  public static List<String> fromDictionaryToList(Dictionary<Int32, String> dictionaryToConvert)
  {
      List<String> returnList = new List<String>();

      foreach (KeyValuePair<Int32, String> i in dictionaryToConvert)
      {
          String stringToAdd = i.Key.ToString() + separationString + i.Value;
          returnList.Add(stringToAdd);
      }

      return returnList;
  }

  public static Dictionary<Int32, String> fromListToDictionary(List<String> listToConvert)
  {
      Dictionary<Int32, String> returnDictionary = new Dictionary<Int32, String>();
      string[] separator = new string[] { separationString };

      foreach (String i in listToConvert)
      {
          string[] convertedStrings = i.Split(separator, StringSplitOptions.None);
          Int32 itemInt;
          Int32.TryParse(convertedStrings[0], out itemInt);
          String itemString = convertedStrings[1];
          returnDictionary.Add(itemInt, itemString);
      }

      return returnDictionary;
  }

So, how do we test this? Pretty simple… just create a new dictionary with some values, send it through both methods, and debug to see what the values are. Here is an example in the Page_Load of an ASPX form:

Code Snippet
  protected void Page_Load(object sender, EventArgs e)
  {
      Dictionary<Int32, String> dictionaryTest = new Dictionary<Int32, String>();

      dictionaryTest.Add(1, "Clearly The First Dictionary Item");
      dictionaryTest.Add(2, "The Second Dictionary Item");

      List<String> convertedDictionary = fromDictionaryToList(dictionaryTest);

      Dictionary<Int32, String> convertedList = fromListToDictionary(convertedDictionary);
  }


So, once we set up a breakpoint and run our project, it should stop at the end of the Page_Load function. Once running, we can see that the values come through, no problem.



Although I implemented this with a string and an integer, since Dictionary is a generic class you can specify whatever types you need. You will then need to modify the functions to support your desired data type.
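
The per-type rewrite can be avoided with a generic version. This is my own illustration, not code from the post; the class and parameter names are hypothetical, and it assumes keys and values never contain the separator and can round-trip through ToString plus a caller-supplied parser:

```csharp
using System;
using System.Collections.Generic;

public static class DictionaryListConverter
{
    private static readonly string[] Separator = { "!|!" };

    // Flatten each pair into "key!|!value", mirroring the methods above.
    public static List<string> ToList<TKey, TValue>(Dictionary<TKey, TValue> source)
    {
        var result = new List<string>();
        foreach (var pair in source)
        {
            result.Add(pair.Key + Separator[0] + pair.Value);
        }
        return result;
    }

    // Rebuild the dictionary; the caller supplies a parser for each half.
    public static Dictionary<TKey, TValue> ToDictionary<TKey, TValue>(
        List<string> source, Func<string, TKey> parseKey, Func<string, TValue> parseValue)
    {
        var result = new Dictionary<TKey, TValue>();
        foreach (var item in source)
        {
            var parts = item.Split(Separator, StringSplitOptions.None);
            result.Add(parseKey(parts[0]), parseValue(parts[1]));
        }
        return result;
    }
}
```

For the original Dictionary<Int32, String> case, the call back would be `ToDictionary(list, int.Parse, s => s)`.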

This error came up today when I was testing the WCF services for a new website I am writing. The ultimate issue here is the mapping of the interface to the service in question, and it usually points to a possible typo or rename of something within your service architecture. The primary message received is:

“The type ‘OdatServiceAzure.SiteService’, provided as the Service attribute value in the ServiceHost directive, or provided in the configuration element system.serviceModel/serviceHostingEnvironment/serviceActivations could not be found. “

So, how do we conquer this error? Well, if you are debugging a program with a bunch of different services, it's probably best to debug each service one by one. Select the service you want to test and run it. If it runs completely, there shouldn't be an issue with the pairing of the interface and the service. If there is, you may see an error like the one shown above. The primary place to look is the markup for the service itself.

Right click on your service, and click ‘View Markup’.


Make sure the namespace and service name here match your service itself. In my case, I had renamed my namespace due to an issue with it, and for some reason my services did not get renamed within the .svc markup.
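
For reference, the ServiceHost directive in the .svc markup looks something like this; the Service value comes from the error message above, while the CodeBehind file name is an assumed example:

```
<%@ ServiceHost Language="C#" Debug="true" Service="OdatServiceAzure.SiteService" CodeBehind="SiteService.svc.cs" %>
```

The Service attribute must match the fully qualified (namespace-included) name of the service class in code.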

For the past couple of months, I have been testing some cloud-related applications to fully take advantage of cloud storage, shared sessions across different nodes, and cloud-to-premises connectivity via a virtual network utilizing Azure Connect. Each presents its own challenges. Finding a provider with a high level of uptime and the ability to scale is very important. So far I have been pretty happy with the ability of Windows Azure to scale to different app sizes, as well as to provide all the technologies that bridge the gap for each section.

Cloud-related technologies can truly be difficult; it is sometimes hard to handle things that are taken for granted when the servers are inside a network and completely manageable locally. Simple things, like remoting into a machine, all the way to managing session state between nodes that have been scaled out.

Two other obstacles are maintaining dependencies on servers that require certain software to be installed at runtime, and ensuring greater security for the important business logic and services a software solution requires. There are two major ways of dealing with this type of issue:

A) Deploy custom images to the cloud that are prebuilt with the software you need to run in the cloud.

B) Make a library available outside of the cloud, that can be called from the cloud, with software dependencies and requirements that might not be feasible in the cloud.

Each has its own positives and negatives.

First, a custom cloud image can take some time to deploy. Not only does the image need to be deployed, it also needs to be updated after deployment. Cloud images contain server software that is critical to keep up to date, and even an already-built image can take a while to deploy just due to maintenance. Uploading these deployments can also take some time, since the images can be several gigabytes in size.

Second, a local library increases the number of moving parts in the system, making some issues harder to debug due to the additional areas of breakage. One nice thing, though: exceptions thrown locally will be available in the Event Viewer, so you essentially have another tier of error reporting off of the cloud (for dependencies within the said module). A big positive here is that you can keep certain libraries and services with software and dependency requirements that just don't make sense in the cloud. For example, say you have an order module that sends information to a backend system but requires two or three other dependencies that live on your network. A cloud solution really isn't feasible: you would have to make multiple connections, install software on a custom image and deploy it, test the connections between these nodes on every instance, and ensure the remote nodes can handle the growing number of connections between instances.

Keeping this in mind, sometimes it makes sense to go the second route: connecting a remote library or service to your instances to provide those much-needed dependencies, reliably and responsibly. To do so, we need a connector to ensure the services can be accessed from within the network; this is where Azure Connect comes in.

Azure Connect is a program that works with Windows Azure to help bridge the gap between endpoints (your servers, on your own network) and your published software. These are the steps I took to set up Azure Connect between my local desktop and the cloud:

  1. Created the WCF project and published it via localhost to a directory on my local computer.
  2. Created a cloud project that accesses the local service based on its qualified name [computer].[mydomain].com.
  3. Ensured it would properly work on localhost, as well as, the service accessible to other computers on my network.
  4. Published the cloud project and ensured that it is working fine without the connection to the endpoint being referenced.
  5. Downloaded the Endpoint client from the Azure portal and installed & confirmed I am connected.
  6. Moved my Endpoint into a group.
  7. Connected the group to the Role I want to connect to.

The tutorial I followed, though, doesn't go into how to troubleshoot nodes, nor how to ping external entities. To enable ping, you must add a startup command that opens the firewall for it. To do so, create a file 'enableping.cmd' and insert the following into it:


Code Snippet
  Echo Enable ICMP
  netsh advfirewall firewall add rule name="ICMPv6in" dir=in action=allow enable=yes protocol=icmpv6
  exit /b 0


Now that you have the file, we must tell the application to run this command on startup. Add the following to the service definition of your endpoint project, inside your WebRole configuration:


Code Snippet
  <Startup>
      <Task commandLine="enableping.cmd" executionContext="elevated" taskType="simple"/>
  </Startup>
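
In context, the task sits inside the WebRole element of the ServiceDefinition.csdef file; the project and role names below are hypothetical examples:

```xml
<ServiceDefinition name="MyCloudProject"
                   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MyWebRole">
    <Startup>
      <!-- enableping.cmd must be included in the role's content so it deploys with the package. -->
      <Task commandLine="enableping.cmd" executionContext="elevated" taskType="simple"/>
    </Startup>
  </WebRole>
</ServiceDefinition>
```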

This will allow you to ping the cloud. To enable pinging from the cloud to your own server, you MUST run the same command on your server. To do so, open a command prompt window and run the same netsh rule:

Code Snippet
  netsh advfirewall firewall add rule name="ICMPv6in" dir=in action=allow enable=yes protocol=icmpv6

Once this is done, you should be able to ping back and forth between your server and the cloud; if not, something else is wrong with your network setup. Ensuring the Azure Connect icon shows that you are fully connected, with no issues, on both the cloud and the server is an important step in verifying this is working.

Another issue I ran into was that port 80 on my computer was not open for traffic seeking to consume the local service I had created. Once port 80 was opened, I had no more issues connecting from the cloud to the service running on a local IIS deployment within my network. Note that this service was not made available outside of my network; Azure Connect provides this connectivity.

Every day we hit challenges in new technologies, and the cloud environment is definitely a challenging one. There are a lot of things that can (and probably will) fail along the way. The only way to make it through is to continue even when you hit the wall. Eventually you will get through!

I have been working on several projects lately that utilize the cloud for many different things, including service, session, database, and client hosting. The ultimate goal is the flexibility to host solutions in the cloud in a fully available manner, with the ability to allocate more resources when needed. I ran into a case where I wanted to pass more information into WCF functions, allowing me to confirm caller identity and manage some other things behind the scenes, without passing this information as parameters.

My original thinking was to somehow extend the functionality of WCF services itself to allow for adding parameters that would be set at initial session start.

An example being:


Code Snippet
  Service1Client myService = new Service1Client();
  myService.itemToPass = "Hello World!";


I didn't really find anything that could meet my needs! It amazes me that there isn't more information out there regarding this type of data passing, since it seems a lot more reasonable to include such an offering than to pass the same value to every method within a service. So I asked on a couple of forums, one post suggested possibly using a static variable, and with that I finally found a solution!

So, at the service level, I declared a private static variable to store the value globally, making it available to the other methods within the service. I also created a setter method so the variable can be set when the connection is established.

Code Snippet
  public class Service1 : IService1
  {
      private static int customVar { get; set; }

      public void setCustomVar(int CustomVar)
      {
          customVar = CustomVar;
      }

      // …
  }
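
One detail worth noting: for the generated client proxy to expose setCustomVar, the method also has to be declared on the service contract. Here is a minimal sketch of the matching interface; the GetData operation is a hypothetical placeholder, not from the original post:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IService1
{
    // The setter the client calls once after connecting.
    [OperationContract]
    void setCustomVar(int CustomVar);

    // Hypothetical placeholder operation that can read customVar internally.
    [OperationContract]
    string GetData();
}
```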


You could ultimately pass multiple things through a service this way, but remember to limit it to values that MUST be provided globally. Also keep in mind that a static variable is shared by every client of the service, not just your own session, so this suits values that genuinely apply service-wide. From here, it is just a matter of connecting to the service (on the client) and setting the variable.


Code Snippet
  Service1Client myService = new Service1Client();
  myService.setCustomVar(12345);


Now that the variable is set, it can be utilized for the lifetime of the connection to the service. This is very useful for those values you want available globally but don't want to insert as a parameter into every call!
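
If you ever need the value scoped per caller rather than stored in one static shared across the whole service, an alternative worth sketching (not from the original post; the header name, namespace, and GetData operation are hypothetical) is to attach the value as a custom WCF message header:

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;

// Client side: attach the value to outgoing calls made inside the scope.
Service1Client myService = new Service1Client();
using (new OperationContextScope(myService.InnerChannel))
{
    var header = MessageHeader.CreateHeader("CustomVar", "http://tempuri.org/headers", 12345);
    OperationContext.Current.OutgoingMessageHeaders.Add(header);
    myService.GetData(); // the header travels with this call
}

// Service side: any operation can read the header back.
int customVar = OperationContext.Current.IncomingMessageHeaders
    .GetHeader<int>("CustomVar", "http://tempuri.org/headers");
```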

When a developer goes to produce a product, there are always issues to resolve, testing to be done, and a lot that goes into the project. Game development is a tough industry, with so much resting on individuals 'liking' a product and being willing to put time into it. Just like any other entertainment, the more people love it, the more it will make. Star Wars, in itself, wasn't much, but with a huge fan following it became one of the most epic industry changers, even when the technology was so young. What movie has proven the same type of fan following today, with the same fan base? Not many. The only one that truly comes to mind is Lord of the Rings… but I don't see an aisle of LOTR toys at my local toy store.

Anyway, my main reason for writing this article was to discuss game wait times, expectations, and the like. There have been epic waits for some games… Duke Nukem, Starcraft 2, Diablo 3, and many more, but those are the ones I have been waiting for. Starcraft 2 released last year (July 2010), and although I was initially interested in the game, I quickly lost interest for many reasons. The story truly didn't meet my expectations.

The first Starcraft & Brood War contained tons of single-player levels. Multiplayer seemed more hashed together than before, leaving me missing the older experience of simple gameplay. The sad thing is the disappointment that set in after all the time spent waiting for Starcraft 2. It has already dropped 10-20 dollars in list price, and I have a bad feeling it will drop from the shelves before Starcraft 1 & Brood War do. I'm sure Blizzard will then create a battle chest and drop SC2 into it, just to get rid of the game.

The most disappointing thing is that I am currently also holding my breath for Diablo 3, also made by Blizzard. The longer we wait as a community, the more we will expect from the game. As a developer, I understand the need to polish a product, but polishing too long can create enormous wait times (as we have seen) and unrealistic expectations.

Hopefully Blizzard can get back to what it once was (before the Activision merger) and create a great, fun experience for gamers, without unrealistic wait times.

With that, I’m happy to present a great video about the release of Duke Nukem.


I work as a .NET developer for JensonUSA bicycles in Riverside, California. I have just completed my first large development project (an entire receiving management system that handles receiving, cart allocation, put-away, and tracking along the way, on handheld devices), and am currently working on several other projects.

I am also a full-time student at Kaplan University, finishing up my Software Development degree, and part owner of a new social network startup that hopes to launch in about a year.

I hope to use this blog to log tips, tricks, suggestions and perspectives along the development process.

Current technologies used:



ASP.NET Webforms

WCF Services

Windows Azure

SQL Server