Visual Studio 2010 Document Previews

If you used Visual Studio 2008 or the Visual Studio 2010 Betas, you may remember a distinct feature: when you moved through documents inside the IDE with CTRL + TAB, you would get a nice little preview of the documents you were flipping through. If you have used the final release of Visual Studio 2010, you may have noticed that this feature has been pulled. Well, you can restore it with a simple registry change!

If you use CTRL + TAB in Visual Studio 2010 today, you’ll get a dialog that looks something like the following:

VS2010 - No Preview

To restore the thumbnail previews, run this simple command from an elevated command prompt in Windows:

reg ADD HKCU\Software\Microsoft\VisualStudio\10.0\General /v ShowThumbnailsOnNavigation /t REG_DWORD /d 1

Now all you have to do is start (or restart) Visual Studio 2010, open your documents again, and cycle through them. You should get a much more useful display like so:

VS2010 - Preview

Isn’t that fantastic?

Simple C# Threading and Thread Safety

A few days ago I compared and contrasted Asynchronous and Parallel Programming. Today I would like to walk you through a very simple threading example in C#, and why the concept of “thread safety” is important to know before you start writing parallel applications.

Since we already know what parallel programming is and how it differs from asynchronous calls, we can drill down into some more aspects of parallel programming. Here is an application that starts a new thread from Main, and both threads call the same method:

using System;
using System.Threading;

class Threading101
{
    static bool fire;

    static void Main()
    {
        new Thread(FireMethod).Start();  // Call FireMethod() on new thread
        FireMethod();                    // Call FireMethod() on main thread
    }

    static void FireMethod()
    {
        if (!fire)
        {
            Console.WriteLine("Method Fired");
            fire = true;
        }
    }
}

At first glance you might say that we will only see “Method Fired” once on screen; however, when we run the program we see this output:

Program Output

We have obviously called the method on both threads, but we got an undesired output: both threads read fire as false before either one had set it to true, so the message printed twice. This illustrates the kind of issue you will encounter when working with threads in C#. This method is not thread safe, and it does not work correctly (in its current state) in threaded applications.

So what can we do about this? We need a way to prevent a thread from entering a critical part of the method while another thread is already inside it. We just have to update our code to use some type of locking and then re-work our application:

using System;
using System.Threading;

class Threading101
{
    static bool fire;

    static readonly object locker = new object();

    static void Main()
    {
        new Thread(FireMethod).Start();  // Call FireMethod() on new thread
        FireMethod();                    // Call FireMethod() on main thread
    }

    static void FireMethod()
    {
        // Use Monitor.TryEnter to attempt an exclusive lock.
        // Returns true if the lock was acquired.
        if (Monitor.TryEnter(locker))
        {
            try
            {
                if (!fire)
                {
                    Console.WriteLine("Method Fired");
                    fire = true;
                }
            }
            finally
            {
                // Always release the lock taken by Monitor.TryEnter
                Monitor.Exit(locker);
            }
        }
        else
        {
            Console.WriteLine("Method is in use by another thread!");
        }
    }
}

Running the code above now produces a better threading result:

Program Output

Now that looks better! We made sure that only a single thread could enter the critical section of the code. We first use Monitor.TryEnter(locker) to attempt the lock; if we acquire it, we step in and run our code, releasing the lock when we are done. If another thread already holds the lock, we print that result to screen instead.

Pretty simple huh? So this little app spawns an extra thread, and both threads fire the same method. However, the variable is only changed once, and the message from the method is only printed once. The first snippet is a perfect example of a method that is not thread safe, and the second one is a great way to protect that same method.
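As an aside, the same fire-once behavior can also be achieved without a lock by using an atomic compare-and-swap. This is just a sketch of an alternative approach, not part of the original example; Interlocked.CompareExchange performs the check and the update in a single atomic step, so exactly one thread can win:

```csharp
using System;
using System.Threading;

class Threading102
{
    static int fire; // 0 = not yet fired, 1 = fired

    // Returns true only for the single thread that wins the atomic swap.
    public static bool TryFire()
    {
        // If fire is still 0, set it to 1 and return the old value (0).
        // Any later caller sees 1 and loses the race.
        return Interlocked.CompareExchange(ref fire, 1, 0) == 0;
    }

    static void Main()
    {
        new Thread(() => Report(TryFire())).Start(); // new thread
        Report(TryFire());                           // main thread
    }

    static void Report(bool fired)
    {
        Console.WriteLine(fired
            ? "Method Fired"
            : "Method already fired on another thread!");
    }
}
```

Because the check and the update happen as one atomic operation, there is no window where two threads can both see fire as unset.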

C# Preprocessor Directives

If you have ever worked with an application that bounces from your workstation, to QA, then to production, the odds are high you have dealt with C# preprocessor directives. While C# does not have a separate preprocessing engine, these directives are processed as if it did, and they are named as they are for consistency with C and C++. Directives can be used for everything from conditionally compiling code based on the environment it will be deployed in, to grouping chunks of your source code together for collapsing inside the Visual Studio code editor. This article will go over each C# preprocessor directive.

C# has the following directives, all of which will be covered in this article:

  • #if
  • #else
  • #elif
  • #endif
  • #define
  • #undef
  • #warning
  • #error
  • #line
  • #region
  • #endregion

Let’s start with #define and #undef. These directives define and undefine symbols: a symbol defined with #define evaluates to true when used in other conditional directives. As you can imagine, #undef undefines a given symbol, so that it yields false.

// Set Debug Mode
#define DEBUG_MODE
// Kill SQL Logger
#undef SQL_LOG

With those two directives down, we can move on to #if, #else, #elif, and #endif directives. These directives let you step into or over chunks of code depending on the condition that is checked. As you can imagine, they behave like if, else if, and else statements in C#. The #endif directive must be used to finish off any statement or statements starting with the #if directive. You may use the ==, !=, &&, || operators to check various conditions. You can also group symbols and operators by using parentheses.

#define DEBUG_MODE
#undef SQL_LOG
using System;

public class SomeClass
{
    public static void Main()
    {
        #if (DEBUG_MODE && !SQL_LOG)
            Console.WriteLine("DEBUG_MODE is defined!");
        #elif (!DEBUG_MODE && SQL_LOG)
            Console.WriteLine("SQL_LOG is defined!");
        #elif (DEBUG_MODE && SQL_LOG)
            Console.WriteLine("DEBUG_MODE and SQL_LOG are defined!");
        #else
            Console.WriteLine("No symbols defined");
        #endif
    }
}

/*
Prints to screen:
DEBUG_MODE is defined!
*/

In just these two examples I have already covered 6 of the 11 possible C# preprocessor directives. The next few will help you add messages to your compiler output.

Now let’s cover the #warning and #error directives. These directives emit a warning or an error, respectively, when you compile your application in Visual Studio. For example, you may want to emit a warning that you left debug mode on, so you don’t accidentally deploy your application to production in a debug state:

#define DEBUG_MODE
using System;

public class SomeClass
{
    public static void Main()
    {
        #if DEBUG_MODE
        #warning DEBUG_MODE is defined
        #endif
    }
}

…and of course #error will cause an error to be displayed in the compiler output:

#define DEBUG_MODE
using System;

public class SomeClass
{
    public static void Main()
    {
        #if DEBUG_MODE
        #error DEBUG_MODE is defined
        #endif
    }
}

The #line directive is stranger than the other preprocessor directives. It allows you to modify the compiler’s line number and, optionally, the file name used in warning and error output. The syntax is as follows:

#line [ number ["file_name"] | hidden | default ]

The hidden option removes successive lines from the debugger until another #line directive is hit. The #line directive is usually used by automated build processes or code generators. As an example, this chunk of code:

using System;

class SomeClass
{
    static void Main()
    {
        #line 208
        int i;
        #line default
        char c;
    }
}

will produce this warning output inside Visual Studio:

C:\Path\To\File.cs(208,13): warning CS0168: The variable 'i' is declared but never used
C:\Path\To\File.cs(10,13): warning CS0168: The variable 'c' is declared but never used
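For completeness, #line can also take a file name, which replaces the file reported in diagnostics as well. The file name "Generated.cs" below is made up for illustration; any warning on the marked line should be reported as coming from Generated.cs at line 500:

```csharp
using System;

class LineDemo
{
    public static string Describe()
    {
        #line 500 "Generated.cs"
        int unused;   // a CS0168 warning here is reported at Generated.cs(500)
        #line default
        return "back to real line numbers";
    }

    static void Main()
    {
        Console.WriteLine(Describe());
    }
}
```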

Finally, we have #region and #endregion. Every #region block must end with an #endregion. Defining a block allows you to expand and collapse the code inside Visual Studio for easier reading and reference. There are some important points to note, though: a #region block cannot overlap an #if block and vice versa. You can, however, nest an #if block inside a #region block, or a #region block inside an #if block. For example:

using System;

class MainBox
{
    static void Main(string[] args)
    {
        #region Secrets
        /*
         * ...here be dragons!
         */
        #endregion
    }
}

I can expand and collapse the section inside Visual Studio as pictured:

Collapsing region

…and those are all the C# preprocessor directives you can use! I love #region, since it allows you to lump related code together for easier reading.

Asynchronous Versus Parallel Programming

The last decade has brought the age of multi-core processors to many homes and businesses around the globe. In fact, you would be hard-pressed to find a computer for sale today without multi-core support (either physical or virtual). Software Engineers and Architects have already started designing and developing applications that use multiple cores, which leads to extended use of asynchronous and parallel programming patterns and techniques.

Before we begin, it will help to review a key difference between asynchronous and parallel programming. The two perform similar tasks and functions in most modern languages, but they have conceptual differences.

Asynchronous calls are used to prevent “blocking” within an application. For instance, if you need to run a query against a database or pull a file from a local disk, you will want to use an asynchronous call. The call will spin off on an already existing thread (such as an I/O thread) and complete its task when it can. Asynchronous calls can occur on the same machine or involve another machine somewhere else (such as a computer on the LAN or a webserver exposing an API on the internet). They are used to keep the user interface from appearing to “freeze” or become unresponsive.
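In .NET terms, “spin off on an already existing thread” can be sketched with the thread pool. This is only an illustration of the idea (real I/O would typically use the Begin/End asynchronous methods on classes like FileStream); the Sleep below stands in for a slow disk or database call:

```csharp
using System;
using System.Threading;

class AsyncSketch
{
    static void Main()
    {
        var done = new ManualResetEvent(false);

        // Hand the slow work to a thread that already exists in the
        // runtime's pool; the calling thread keeps going immediately.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            Thread.Sleep(100);              // stand-in for a disk or DB call
            Console.WriteLine("Query finished");
            done.Set();
        });

        Console.WriteLine("Main thread is free to stay responsive");
        done.WaitOne();                     // block only so the demo can exit cleanly
    }
}
```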

In parallel programming you still break up the work into tasks, but the key difference is that you spin up new threads for each chunk of work, and each thread can reach a common pool of variables. In most cases parallel programming takes place only on the local machine and is not sent out to other computers, because each parallel task spins up a brand new thread for execution. Parallel programming can also be used to keep an interface snappy and not feel “frozen” when running a CPU-intensive task.

So you might ask yourself, “Well, these sound like the same deal!” In reality they are not. With an asynchronous call you have no control over the threads or the thread pool (a collection of threads) and are dependent on the system to handle the requests. With parallel programming you have much more control over the chunks of work, and you can even create a number of threads to be handled by a given number of cores in a processor. However, each call to spin up or tear down a thread is very system intensive, so extra care must be taken when writing your program.

Imagine this use case: I have an array of 1,000,000 “person” objects. I have asked you, the programmer, to extend each of these person objects with an internal id equal to the object’s array index. I also tell you that the latest buzzword on the street is “multi-core processing”, so I want to see patterns based on it used for this assignment. Assuming you have already defined the original “person” class and written a “NewPerson” class with the added dimension, which pattern (asynchronous or parallel) would be preferred to break up the work, and why?

The correct answer, of course, is the parallel pattern. Since no object depends on data from somewhere else (such as a remote API), we can split the million objects into smaller chunks and perform the object copy and the addition of the new parameter. We can then send those chunks to different processor cores for execution. Our code can even be designed to account for n processors in any computer and evenly spread the work across CPU cores.
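With .NET 4’s Task Parallel Library, the chunking described above can be sketched in a few lines. The Person and NewPerson classes below are stand-ins for the ones from the exercise; Parallel.For partitions the index range across the available cores for us:

```csharp
using System;
using System.Threading.Tasks;

class Person
{
    public string Name;
}

class NewPerson : Person
{
    public int InternalId; // the added dimension
}

class ParallelDemo
{
    public static NewPerson[] AddIds(Person[] people)
    {
        var result = new NewPerson[people.Length];

        // The runtime splits the index range into chunks and spreads
        // them across cores; each iteration writes only its own slot,
        // so no locking is needed.
        Parallel.For(0, people.Length, i =>
        {
            result[i] = new NewPerson { Name = people[i].Name, InternalId = i };
        });

        return result;
    }

    static void Main()
    {
        var people = new Person[1000000];
        for (int i = 0; i < people.Length; i++)
            people[i] = new Person { Name = "Person " + i };

        NewPerson[] tagged = AddIds(people);
        Console.WriteLine(tagged[42].InternalId); // prints 42
    }
}
```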

Now here at Mercer I am working on a .NET web product. Our tech leads and developers have created a “Commons” library that contains factories and objects that are useful to all the sub-projects that exist in this very large .NET product. Exposed services or factories are hosted via IIS and are accessible by other projects in the product by simply referring to the “Commons” namespace. Basically the “Commons” library prevents all developers from re-inventing the wheel if they need things such as a log writer, methods to extract objects from a MSSQL database, or even methods for interoperability between projects.

When it comes to the “Commons” library, we use asynchronous calls between the server and client. We do this because a user could potentially hit one server in the cluster, then make another request that is picked up by a separate server in the cluster. It would not be helpful to spin up a processing thread on Server A, only for the load balancer to redirect the client to Server B (which would then have to spin up its own thread). Since our services are built around asynchronous calls, all the client has to do is pass in some session information, and the server can pull up the requested objects or data. If we used the parallel pattern here, we would be creating a ton of overhead with the constant setup and teardown of threads within the webserver, and the client might still be redirected to another server entirely by the forward-facing load balancer. For our “Commons” it makes much, much more sense to let the operating system handle sending and receiving asynchronous calls.

This should serve as a basic compare and contrast of asynchronous and parallel programming. If you remember anything, remember this: while both patterns are used to prevent blocking within, say, a user interface, an asynchronous call uses threads already managed by the system, whereas parallel programming requires the developer to break up the work and to spin up and tear down the threads needed.

Follow Up on IIS Services, 504s, and Fiddler

The other day I posted an article discussing my issue tracking down a bug in a ClickOnce application. I had noted that once I made a change to maxRequestLength in my web.config file the issue went away. Well, that change was not the real solution.

While maxRequestLength does help the application pull down the query at times, it was not what corrected the original problem! This morning, as I continued to work on the application, I got the same 504 error I thought I had corrected (by the way, you can read the original article here). Since IIS was not giving me any vital information other than the 504 timeout, I had to find another way to catch the bug.

I had to add another entry to my web.config file to catch any errors my services were throwing that were not being logged in IIS or Fiddler. Sure enough, I was able to log a trace that showed me the root of the 504 problem. Before I tell you what the problem was with my program, I would like to walk you through building a tracer similar to the one I used.

Let’s start building our tracer. First we drop in a system.diagnostics block. The system.diagnostics block is in charge of catching various trace messages and outputting them somewhere of our choosing, so it will contain child tags that control how it operates. For reference, you can read all about the System.Diagnostics namespace on MSDN.

<system.diagnostics>
    <!--
    (Here be dragons)
    -->
</system.diagnostics>

Now we will add another line to our XML snippet: the trace tag with the attribute autoflush="true". The Trace class is in charge of tracing through methods in your code, and autoflush defines whether Flush is called after every write (see the link for the full MSDN specification). Our XML now looks like this:

<system.diagnostics>
    <trace autoflush="true" />
    <!--
    (Here be dragons)
    -->
</system.diagnostics>

Now we step down into the sources element and define a source for diagnostics. In this case our source will have the name “System.ServiceModel”, which is defined on MSDN as:

System.ServiceModel: Logs all stages of WCF processing, whenever configuration is read, a message is processed in transport, security processing, a message is dispatched in user code, and so on.

Source: https://msdn.microsoft.com/en-us/library/ms733025.aspx

We then define our switchValue, which determines which messages are captured. Possible values are:

  • Off
  • Critical
  • Error
  • Warning
  • Information
  • Verbose
  • ActivityTracing
  • All
  • …more details on each level can be found on MSDN

In this case we want to catch messages that are “Information” or “ActivityTracing” operations. The propagateActivity setting determines whether the activity should be followed to other endpoints that take part in the exchange. By setting this to true, you can take trace files generated by any two endpoints and see how a set of traces on one end flowed to a set of traces on the other. Our XML now looks like this:

<system.diagnostics>
    <trace autoflush="true" />
    <sources>
        <source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true">
            <!--
            (Here be dragons)
            -->
        </source>
    </sources>
</system.diagnostics>

We are almost done defining the XML for our trace operations. Now we just need to add the actual listener that will catch anything sent from the source we defined. We will add an XmlWriterTraceListener to write the information out: we name it “sdt”, define the listener type as System.Diagnostics.XmlWriterTraceListener, and instruct it to save a file to the website root as “Server.e2e”. Our XML for tracing is now complete!

<system.diagnostics>
    <trace autoflush="true" />
    <sources>
        <source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true">
            <listeners>
                <add name="sdt" type="System.Diagnostics.XmlWriterTraceListener" initializeData="Server.e2e"/>
            </listeners>
        </source>
    </sources>
</system.diagnostics>

We then go back to the application (after recycling the App Pool in IIS that hosts the application at fault), launch the ClickOnce program, and trigger the error. When the error finally occurs, all you have to do is pull down the file the listener wrote to (in our case Server.e2e) and open it with the Microsoft Service Trace Viewer, located under All Programs -> Visual Studio 2010 -> Microsoft Windows SDK Tools. You should see any errors that are occurring, and hopefully the one you need to correct the problem!

After I had my output opened in the Service Trace Viewer I found the original problem. In my case the problem was listed as:

EXCEPTION TYPE:
System.Data.OracleClient.OracleException, System.Data.OracleClient,
  Version=4.0.0.0, Culture=neutral, PublicKeyToken=xxxxx

MESSAGE:
ORA-12154: TNS:could not resolve the connect identifier specified

My application was dying because IIS was not able to return a result to it, and IIS was unable to return a valid result because an Oracle database was not returning any data. In our QA environment at Mercer, we have a clustered Oracle database configuration. When IIS made a query to the Oracle cluster, it had a 50/50 chance of hitting a server that did not want to respond. This is why the application worked some of the time and crashed at other times.

The quick fix? Tell the application to only hit the good Oracle server.

The long-term, correct fix to make? Obviously get the Oracle cluster in QA corrected. Then rebuild the service in IIS to understand this type of error, and relay back a better message to the client. The client ClickOnce application will then need to understand this new message, and throw up an error message instead of locking up.

Hopefully this article is of use to understanding how to trace events in your applications and/or provide another viewpoint to use when you have an intermittent issue like I ran into.

IIS Services, 504s, and Fiddler

I have been tracking down a random issue in one of our projects here at Mercer. It is a simple ClickOnce application with a handful of services hosted through an IIS website. When I worked on the tool in my local development environment, everything worked fine. When I deployed the tool to QA for testing, it completely broke at a single point in the application for our QA team in India. The tool worked fine here in Louisville, with the exception of this morning, when I was finally able to reproduce the problem.

ATTENTION! The actual problem was found later on and is detailed here.

With the problem triggering locally, I started up Fiddler on my program. In case you have never heard of Fiddler it is a wonderful tool to track HTTP requests entering and exiting your computer. From the official Fiddler website:

Fiddler is a Web Debugging Proxy which logs all HTTP(S) traffic between your computer and the Internet. Fiddler allows you to inspect all HTTP(S) traffic, set breakpoints, and “fiddle” with incoming or outgoing data. Fiddler includes a powerful event-based scripting subsystem, and can be extended using any .NET language.

Fiddler is freeware and can debug traffic from virtually any application, including Internet Explorer, Mozilla Firefox, Opera, and thousands more.

Source: https://www.fiddler2.com/fiddler2/

When I performed the action that caused the application to lock up (in this case, hitting a service on IIS to pull down a list from a database), Fiddler greeted me with this error message:

HTTP/1.1 504 Fiddler - Receive Failure
Content-Type: text/html
Connection: close
Timestamp: XX:XX:XX.XXX

ReadResponse() failed: The server did not return a response for this request.

This had me lost; I was not sure why the server would kick back a 504 timeout error. I was able to reach the service at its endpoint URL, and I was able to use svcutil.exe to generate C# stubs from it. The logs in IIS did not give me any further information either. What I did know is that whenever the ClickOnce application requested information, it would be unable to pull in any results right after Fiddler logged a 504 from the service.

So I had to do some investigation server-side. After all, the timeout occurred during the client request, so that was the best place to start. I tried re-deploying the application to the QA cluster multiple times, digging through IIS configuration settings, and double-checking my code changes until I figured out the magic bit I was missing.

The request was being cut off by IIS because it was too large. So I simply added this line to the Web.config for the IIS project in Visual Studio:

<system.web>
    <!-- ... -->

    <httpRuntime maxRequestLength="16384"/>

    <!-- ... -->
</system.web>

The maxRequestLength attribute of httpRuntime controls how much information IIS will accept in a request before throwing an error. In this case the application was building up our desired database query but exceeding the limit. For testing purposes, I increased the limit to 16384 KB, or 16 MB. This allowed the ClickOnce application to receive all the information it wanted, and our QA team in India was able to run the application as expected.

There are some things to keep in mind with maxRequestLength. The attribute exists in ASP.NET websites to help prevent denial-of-service attacks: it stops a malicious user from making a huge HTTP request in an attempt to lock up or crash the server. In this case it was cutting my application off before it could load the results of the request.

Specialized C# Operators

In a previous post I went over some random C# operators. This article is a follow-up covering some more advanced C# operators and techniques: specifically the ?:, ~, |=, and ^= operators.

We will start off with the conditional operator ?:. This operator is used to simplify an expression to test for a boolean value, and execute specific code that matches the value. Let’s start off with a code snippet that does not use the ?: operator.

int TestValue = 6;
int result = -1;

if(TestValue <= 5)
    result = 0;
else
    result = 1;

Instead we can use ?: to simplify that. We can refactor that into a new code snippet:

int TestValue = 6;

int result = TestValue <= 5 ? 0 : 1;

In plain English the ?: operator reads as:

condition ? CodeIfTrue : CodeIfFalse;

Now we will move onto the ~ operator. This operator is used for a NOT bitwise operation, or better stated from MSDN:

The ~ operator performs a bitwise complement operation on its operand. Bitwise complement operators are predefined for int, uint, long, and ulong.

Source: ~ Operator

A NOT bitwise operation is also known as a one’s complement operation. It takes the binary representation of a value and flips each bit: every 1 becomes a 0, and every 0 becomes a 1. The code below demonstrates the operator:

byte OriginalValue = 208;
byte complement = (byte) ~OriginalValue;

string OriginalString = Convert.ToString(OriginalValue, 2);
string ComplementString = Convert.ToString(complement, 2);

Console.WriteLine(OriginalString.PadLeft(8, '0'));
Console.WriteLine(ComplementString.PadLeft(8, '0'));

/*
Prints to screen the following:
11010000
00101111
*/

As you can see, every bit is flipped to its opposite value. This could be useful if you are using C# to interact with a piece of hardware and need to manipulate the bits of data exchanged with it.

Next we have the |= operator, which performs a bitwise OR against a variable and assigns the result back to it. When comparing two bits, if one or both are 1, the result is 1. The following C# code shows an example of the OR operation:

byte Value1 = 245;
byte Value2 = 113;

string Value1String = Convert.ToString(Value1, 2);
string Value2String = Convert.ToString(Value2, 2);

Console.WriteLine(Value1String.PadLeft(8, '0'));
Console.WriteLine(Value2String.PadLeft(8, '0'));

Value1 |= Value2;
string ResultString = Convert.ToString(Value1, 2);
Console.WriteLine("OR:");
Console.WriteLine(ResultString.PadLeft(8, '0'));

/*
Prints to screen the following:
11110101
01110001
OR:
11110101
*/

Once again, this is useful for specific hardware signaling or situations where specific binary operations are needed.

Finally we have the ^= operator, another bitwise compound assignment, this time performing an exclusive OR (XOR). An XOR operation yields 1 in each bit position where exactly one operand has a 1; if both bits are the same, the result is 0. This code snippet demonstrates how it works in C#:

byte Value1 = 245;
byte Value2 = 113;

string Value1String = Convert.ToString(Value1, 2);
string Value2String = Convert.ToString(Value2, 2);

Console.WriteLine(Value1String.PadLeft(8, '0'));
Console.WriteLine(Value2String.PadLeft(8, '0'));

Value1 ^= Value2;
string ResultString = Convert.ToString(Value1, 2);
Console.WriteLine("XOR:");
Console.WriteLine(ResultString.PadLeft(8, '0'));

/*
Prints to screen the following:
11110101
01110001
XOR:
10000100
*/

Again, this is useful for things like hardware communication or data stream manipulation.
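In everyday C# you will most often see |= and ^= used against a [Flags] enum rather than raw bytes; the enum and its flag names below are made up purely for illustration:

```csharp
using System;

[Flags]
enum DeviceState
{
    None    = 0,
    Powered = 1,  // bit 0
    Online  = 2,  // bit 1
    Faulted = 4   // bit 2
}

class FlagsDemo
{
    static void Main()
    {
        DeviceState state = DeviceState.None;

        state |= DeviceState.Powered;   // set a flag with bitwise OR
        state |= DeviceState.Online;
        state ^= DeviceState.Online;    // toggle it back off with XOR

        Console.WriteLine(state);                              // prints: Powered
        Console.WriteLine((state & DeviceState.Powered) != 0); // prints: True
    }
}
```

The & test at the end is the usual way to check whether a particular flag is currently set.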

There are plenty more bitwise operations that C# can perform, all of which are detailed on MSDN.

IronPython and C#

The other day I wrote about dynamic types in C#, covering a few use cases from COM interaction to working with other languages. Today I have put together an example that loads a Python file into C# through IronPython.

Before we can work with IronPython in C#, we need to setup our environment. Here is a quick overview of the steps we will take before we work with IronPython:

  1. Install the latest stable release of IronPython
  2. Create a new C# Console Application in Visual Studio 2010
  3. Add required references for IronPython
  4. …then write the code!

The first step in working with IronPython in Visual Studio 2010 is to actually install IronPython. Visit IronPython.net to grab the latest stable release and install it with all the recommended components. For reference, I installed IronPython version 2.6.1 when I wrote this article. After that is done, you can go ahead and start up Visual Studio.

After Visual Studio has started up, you’ll need to start a new C# Console Application project. After you have created that we are going to need to add references (Right-Click on References in the Solution Explorer > “Add Reference…”) to this project. Assuming you installed IronPython in the default directory you will find all the needed references in “C:\Program Files\IronPython 2.6 for .NET 4.0” on 32-bit systems and “C:\Program Files (x86)\IronPython 2.6 for .NET 4.0” on 64-bit systems. We will be adding the following references:

  • IronPython
  • IronPython.Modules
  • Microsoft.Dynamic
  • Microsoft.Scripting

Now we can actually get to the code! First we make a new text file in the root of our project and call it PythonFunctions.py. Once it has been created, update the file’s properties: set Copy to Output Directory to Copy always. Now we fill out our Python file with some functions:

def hello(name):
	print "Hello " + name + "! Welcome to IronPython!"
	return

def add(x, y):
	print "%i + %i = %i" % (x, y, (x + y))
	return

def multiply(x, y):
	print "%i * %i = %i" % (x, y, (x * y))
	return

This file defines three basic Python functions: one that says “Hello {Name}! Welcome to IronPython!” and two math functions. All of these functions will print to our C# console using the Python print statement.

Now that we have our Python file prepared, we rename the generic Program.cs file to IronPythonMain.cs. As always, allow Visual Studio to update the references in the file when prompted. Our C# file will follow this workflow:

  • Create the IronPython Runtime
  • Enter a try/catch block to catch any exceptions
  • Attempt to load the Python file
  • Run the Python Commands
  • Exit the Program

So here is the C# that will run our IronPython program:

using IronPython.Hosting;
using IronPython.Runtime;
using Microsoft.Scripting.Hosting;
using System;

namespace IntroIronPython
{
    class IronPythonMain
    {
        static void Main(string[] args)
        {
            // Create a new ScriptRuntime for IronPython
            Console.WriteLine("Loading IronPython Runtime...");
            ScriptRuntime python = Python.CreateRuntime();

            try
            {
                // Attempt to load the python file
                Console.WriteLine("Loading Python File...");
                // Create a Dynamic Type for our Python File
                dynamic pyfile = python.UseFile("PythonFunctions.py");
                Console.WriteLine("Python File Loaded!");

                Console.WriteLine("Running Python Commands...\n");

                /**
                 * OK, now this is where the dynamic type comes in handy!
                 * We will use the dynamic type to execute our Python methods!
                 * Since the compiler cannot understand what the python methods
                 * are, the issue has to be dealt with at runtime. This is where
                 * we have to use a dynamic type.
                 */

                // Call the hello(name) function
                pyfile.hello("Urda");
                // Call the add(x, y) function
                pyfile.add(5, 17);
                // Call the multiply(x, y) function
                pyfile.multiply(5, 10);
            }
            catch (Exception err)
            {
                // Catch any errors on loading and quit.
                Console.WriteLine("Exception caught:\n " + err);
                Environment.Exit(1);
            }
            finally
            {
                Console.WriteLine("\n...Done!\n");
            }
        }
    }
}

When the program is run, we are greeted with this output:

IronPython Output

Again, take note that it is the print statement from the Python file that is driving the console output in this application. All the C# side is doing is spinning up the runtime, loading the Python file, and calling the Python methods we defined.

Through the power of IronPython and C#’s dynamic type, we are able to pull Python code and functions into C# for use. The dynamic type figures out what it needs to be at run time from the Python file, and it locates and invokes the Python functions we call on it through the IronPython runtime. All of this can be done with Python code you may already have but have not yet ported to C#. This project is a perfect example of using C# and Python together through C#’s curious dynamic type.

Read More
Dynamic Types in C#

When C# 4.0 was released, it added a new type for variables called dynamic. The dynamic type is a static type, but objects of this type bypass static type checking. Now if your head has just exploded from reading that last sentence, I apologize. When you compile an application that contains any dynamic objects, those objects are assumed to support any operation that may be run against them. This frees the developer from worrying about where a member is coming from, be it XML, an HTML DOM, or a dynamic language like IronPython. However, if a method or member does not actually exist, an exception is thrown at run time instead.

All of these basic concepts come together to form the dynamic type in C#. It is a strange, new concept with some interesting use cases, and those use cases usually involve interoperating with other languages or document models.

Let’s say I build a simple class that describes a person object. I will also go ahead and create a main class, build a person object with the dynamic type, and print one line to screen.

using System;

namespace IntroDynamicTypes
{
    class Person
    {
        public Person(string n)
        {
            this.Name = n;
        }

        public string Name { get; set; }
    }

    class DynamicTypesProgram
    {
        static void Main(string[] args)
        {
            dynamic DynamicPerson = new Person("Urda");
            Console.WriteLine("Person Created, Name: " +
                              DynamicPerson.Name);
            // Prints "Person Created, Name: Urda"
        }
    }
}

Now you may notice, as you key this into Visual Studio 2010, that you will not have your normal IntelliSense to guide you. Instead you will be prompted with this notice:

No IntelliSense

Since we have defined this Person object as dynamic, we can call any method we want on it! The compiler will not check for anything, nor stop you from building an application whose objects use undefined methods. This is because a dynamic object resolves its methods at run time, with the expectation that the method definitions will exist when the program is run. In fact, we can even add some more code to our Main like so…

static void Main(string[] args)
{
    dynamic DynamicPerson = new Person("Urda");
    Console.WriteLine("Person Created, Name: " +
                      DynamicPerson.Name);
    // Prints "Person Created, Name: Urda"

    // This will throw an error only at runtime,
    // *not* at compile time!
    DynamicPerson.SomeMagicFunction();
}

At this point you’ll notice we have added a call to a method named SomeMagicFunction that does not exist in the class, yet Visual Studio 2010 still lets us compile the application. Only at run time does the application throw an exception, when it attempts to invoke SomeMagicFunction. If the function were made available through some form of interop, however, the call would succeed.
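This late-bound behavior is how dynamic languages work all the time. As a point of comparison, here is a rough Python sketch of the same Person example (the names simply mirror the C# above): the attribute lookup is resolved at run time, and calling a method that does not exist only fails when that line actually executes:

```python
class Person:
    def __init__(self, name):
        self.name = name

person = Person("Urda")
print("Person Created, Name: " + person.name)
# Prints "Person Created, Name: Urda"

# Like SomeMagicFunction() on a C# dynamic, this is only
# checked when the line runs, never at "compile" time:
try:
    person.some_magic_function()
except AttributeError as err:
    print("Runtime failure: " + str(err))
```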

So the dynamic type allows C# to play nicely with other languages and object models such as IronPython, HTML DOMs, or COM APIs. Think of dynamic as a way to bridge the gap between strongly typed components like C# and weakly typed components like IronPython, COM, or DOM objects.

Read More
Understanding Path Limits in TFS

Team Foundation Server (TFS) is bound to some limitations that can potentially break your Visual Studio project. One of these limitations is the character limit on a file path. If you overshoot this limit, you will run into issues when adding new files to TFS or attempting to compile your project in Visual Studio. Here is a quick overview explaining why TFS behaves like this and what you can do about it.

When attempting to add such a file to TFS, or to compile it in Visual Studio, you will be greeted with this error:

The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.

So why is this an issue? Surely modern operating systems should not be bound by these kinds of restrictions. After all, once you get past C:\Users\[Username]\Documents\Projects\... you have already eaten up 40 or so characters! The problem is that TFS is apparently making non-Unicode calls to create paths. When a program makes a non-Unicode call, the path is limited to 260 characters. Programs that create paths through the Unicode Windows API do not share this problem, as their limit is roughly 32,767 characters.
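One practical mitigation is to audit your workspace before TFS complains. A small script can walk the tree and flag offenders; this is just a sketch, where the 260/248 thresholds come from the error message above and the root path passed in is whatever workspace you point it at:

```python
import os

MAX_FILE_PATH = 260  # fully qualified file name limit (from the TFS error)
MAX_DIR_PATH = 248   # directory name limit (from the TFS error)

def find_long_paths(root):
    """Yield (path, length) for every path that breaks the non-Unicode limits."""
    for dirpath, dirnames, filenames in os.walk(root):
        if len(dirpath) >= MAX_DIR_PATH:
            yield dirpath, len(dirpath)
        for name in filenames:
            full = os.path.join(dirpath, name)
            if len(full) >= MAX_FILE_PATH:
                yield full, len(full)

if __name__ == "__main__":
    # C:\Projects is a hypothetical workspace root; substitute your own.
    for path, length in find_long_paths(r"C:\Projects"):
        print("%4d  %s" % (length, path))
```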

So what can you do about this? The best thing to do is to shorten your base path. You can easily do this by moving your project from, say, ‘My Documents’ to something along the lines of C:\Projects or C:\TFS.

If you are really interested in the full technical details of this issue, you can visit MSDN for more information.

Read More