ADO.NET Connections, Commands, Parameters

I wanted to write this because, over the last few years, I have seen code that, while not all bad and sometimes even working as intended, lacks a certain clean look and is occasionally a bit inefficient.

As an example to help me explain it I wrote a little console application, you can see the code at the end of this post.

The example is using a localdb version of the MS-SQL Server where a Test database was created.
Table1 is a table on Test with two fields: Id – an autoincremented primary key and Color – an nchar(10) field.

The example inserts two records in the table, one by calling Method1() and the other one by calling Method2().

Method1() contains the kind of code I mentioned above: working, but not so clean.
Method2() is the cleaner alternative, which I hope you will adopt from now on.

The comments in the code should be sufficient to explain why the Method2() code can be written that way while keeping its functionality identical to Method1()'s and looking much cleaner.

I will however summarize the improvements the new code brings:

  • Instances of classes like SqlConnection and SqlCommand are disposable (they implement IDisposable), so we can use the using block/scope, which upon exit automatically calls Dispose() on the variables defined at the using declaration line. Of course, what the implementation of Dispose() does differs from one class to another. That is where the MSDN page for the respective class can shed some light, and if that is not enough then you will have to use a tool like Reflector or JustDecompile (from Telerik) to take a peek at how Dispose is implemented for a particular class. For SqlConnection, Dispose() will, among other things, close the connection, so you no longer have to bother with that if you tied your SqlConnection instance to a using block.
    • Above I said “the variables defined at the using declaration line” and it is not a mistake. The using syntax allows more than one variable to be tied to the using statement (separated by commas) as long as they are of the same type, so you can declare and instantiate two different connections with one using (see the short sketch after this list). However, when declaring multiple variables we have to specify the SqlConnection type explicitly, because implicitly typed variables have to be declared individually (e.g. var m = 1, n = 2 is not allowed).
  • There are a few cases when you can avoid having to declare SqlParameter variables just because you need to reference them later. The Add() and AddWithValue() methods return the SqlParameter they are creating so at least one more property can be set. Often, that’s enough to set the Value when Add() was used or set the SqlDbType when AddWithValue() was used.
  • It is a good idea to always specify the SqlDbType for a parameter, because a type mismatch with the value assigned to the parameter can then be caught as an InvalidCastException before the SQL statement is executed.
  • The same goes for the size of the parameter, which can be specified as the third argument when calling the Add() method. It is important because if we used Method1() to insert a string longer than 10 characters into the [Color] field, SQL would raise a truncation error that comes back to our application as a SqlException. However, if we use Method2(), which specifies the size of the field (10), the .NET code will make sure the value of the parameter is truncated before being sent to the SQL Server, thus ensuring the success of that SQL command.
  • You can always use the name of a parameter as an index into the SqlCommand.Parameters collection to reference the corresponding SqlParameter instance; it is a good way of avoiding having to declare a new variable just to store a reference that will be reused once or twice.
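
Here is the short sketch promised above for the multi-variable using declaration; it only illustrates the syntax, and the second database name is made up:

// Two connections declared, and later disposed, by a single using statement.
// The type must be spelled out; var is not allowed when declaring more than one variable.
using (SqlConnection source = new SqlConnection(@"Data Source=(localdb)\ProjectsV12;Initial Catalog=Test;Integrated Security=True"),
                     target = new SqlConnection(@"Data Source=(localdb)\ProjectsV12;Initial Catalog=Test2;Integrated Security=True"))
{
    source.Open();
    target.Open();
    // ... work with both connections; both are disposed (and therefore closed) here
}

And here is the full example application:
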
using System;
using System.Data;
using System.Data.SqlClient;
namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            Method1();
            Method2();
        }
        private static void Method1()
        {
            SqlConnection conn = new SqlConnection(@"Data Source=(localdb)\ProjectsV12;Initial Catalog=Test;Integrated Security=True;Connect Timeout=30;");
            SqlCommand comm = new SqlCommand(@"INSERT INTO Table1 (color) VALUES (@color)", conn);
            try
            {
                SqlParameter pColor = comm.Parameters.Add("@color", SqlDbType.NChar);
                pColor.Value = "Red";

                // Sometimes we see AddWithValue being used but without specifying
                // the SqlDbType so they can avoid having to declare a new SqlParameter 
                // comm.Parameters.AddWithValue("@color", "Red");
                
                // other times a SqlParameter variable was created just because it needed to be referenced later
                Console.WriteLine("Param: {0} = {1}", pColor.ParameterName, pColor.Value);

                conn.Open();

                if (conn.State == ConnectionState.Open)
                {
                    comm.ExecuteNonQuery();
                }
                // Don't need to check if connection is open or any other state, 
                // Close() can be called many times, no exception will be generated
                conn.Close(); 
            }
            catch (SqlException ex)
            {
                Console.WriteLine(ex.ToString());
                throw;
            }
            finally
            {
                comm.Dispose();
                //conn.Dispose(); Close and Dispose on a connection are functionally equivalent
            }
        }
        private static void Method2()
        {
            try
            {
                using (var conn = new SqlConnection(@"Data Source=(localdb)\ProjectsV12;Initial Catalog=Test;Integrated Security=True;Connect Timeout=30"))
                {
                    conn.Open();
                    if (conn.State != ConnectionState.Open) return;

                    using (var comm = new SqlCommand(@"INSERT INTO Table1 (color) VALUES (@color)", conn))
                    {
                        // both Add() and AddWithValue() return the SqlParameter, so we can set at least one other property on the same line
                        comm.Parameters.Add("@color", SqlDbType.NChar, 10).Value = "Blue";
                            
                        // we don't have to create a variable to reference a Parameter, we can do it by its index
                        Console.WriteLine("Param: {0} = {1}", comm.Parameters["@color"].ParameterName, comm.Parameters["@color"].Value);

                        comm.ExecuteNonQuery();
                    } // here the SqlCommand instance will be disposed automatically
                }// here the connection will be Disposed/Closed automatically
            }
            catch (SqlException e)
            {
                Console.WriteLine(e);
                throw;
            }
        }
    }
}

Debugging code called while NUnit testing.

This post is intended mostly as a reminder for myself.
This is probably a beaten path so I expect a lot of people already know how to do this if they need.

Let’s say you are executing your NUnit tests using the NUnit console.
Something is happening and you want to be able to stop at a breakpoint somewhere in the code that you are testing.
So, while you have your NUnit console executable up and running, in Visual Studio go to Tools\Attach to Process ….
This will open a dialog where you can select the process you want to attach to; the process is “nunit-agent.exe”, and after you have selected it click Attach. (Leave all the other settings at their default values: Transport=Default, Qualifier=<your_computer_name>, Attach to=Automatic.)

We will assume the dialog was dismissed cleanly with no errors and you are back in the main VS window. At this point, if you have a breakpoint somewhere in the code under test, switch to the NUnit console and just Run (or Run All) the tests. Of course, the portion of the tested code with a breakpoint in it has to be called eventually (directly or indirectly) from the testing code. If that is the case, the execution will now stop at that breakpoint, allowing you to go step by step, investigate the values of variables and so on.

Hoping this helps,
Cheers

Unable to access computer B on a windows network from computer A.

\\myNAS is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions.Access is denied

I ran into this nightmare of a problem last Friday. Honestly, I don't know what happened, but all of a sudden I could no longer access my Synology NAS from the workstation, either by name or by fully qualified domain name. By IP address it was working fine, but you don't want that approach to be the resolution unless you are desperate.

It took Saturday, Sunday and Monday of on-and-off grinding at the issue to finally realize what the problem was. I have to mention that from other machines I could access the NAS just fine. Moreover, I could access other hosts on my network from my workstation with no problems. This was strictly an issue between my station and my NAS.

After numerous Wireshark traces, and after struggling to discern which records in those traces represent the failed attempts to access my NAS, I started to believe more and more that this was the record pointing out the problem: tree connect andx response, error STATUS_ACCESS_DENIED.

A simple Google search took me to this blog entry. Even though it is only similar to my problem, it made me remember a Kerberos error I had seen in the Wireshark traces, in a reply from my DC: KRB5 KRB Error: KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN. I should mention that my NAS is part of a domain and uses AD authentication.

Naturally, as the next step, I went looking for the request related to the above reply: KRB5 AS-REQ. Expanding the request in Wireshark to see the name of the principal that my DC does not recognize, I was happy to see this: Client Name (Principal): admin, Realm: MYDOMAIN. I know for sure I don't have such a user in MYDOMAIN, so where is this coming from and why is it trying to authenticate using this specific account?

Thanks to my memory, which is still in decent shape, I remembered seeing, while analysing the Event Logs, a very isolated warning in the System event log that was happening during the Windows boot procedure. This is the warning's message:

The password stored in Credential Manager is invalid. This might be caused by the user changing the password from this computer or a different computer. To resolve this error, open Credential Manager in Control Panel, and reenter the password for the credential MYDOMAIN\admin.

This particular warning was confirmation that somewhere Windows was storing wrong credentials that were being used automatically when accessing my NAS.

What could be clearer than that? Kudos to whoever was thoughtful enough to write that error message. I immediately opened Control Panel and typed Credential Manager in the Search box, then clicked the Credential Manager link that showed up. It took me two seconds to spot the entry for my NAS' name in the Windows Credentials section. I deleted the entry, went back to Windows Explorer, tried both \\myNAS and \\myNAS.mydomain.local and they worked!
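
For what it is worth, the same cleanup can also be done from a command prompt with the built-in cmdkey tool; the first command lists the stored credentials, the second deletes the one for a given target (the target name to use is whatever shows up in the listing for your NAS, myNAS here is just my case):

cmdkey /list
cmdkey /delete:myNAS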

Cheers.

Exe, Dll assemblies and their config file applicationSettings group section.

The background

I have been reading articles and forum posts about applicationSettings for almost a week now.

In almost every thread there is someone who appears to have correctly pointed out that, when deployed, class libraries cannot have config files the way executables do. I mean, you can always have a ClassLibrary.dll.config file there, but it won't make any difference if you just copied it from the output folder of the class library project into the folder where the application was deployed.

The application's code will not be able to read any settings from that file (unless some modifications are made to both the exe's and the dll's config files).

The same people say that, to use those settings you happened to create at design time for the class library, you have to somehow merge its applicationSettings group section into the deployed application.exe.config of the application that hosts/consumes the dll. I have yet to see a clear example of how to do it.

However, you can access the class library's settings as they were configured at the last compilation without merging its applicationSettings group section into the executable's config file. All the settings created in the class library at design time exist as properties of the Settings class in the My namespace. Because they are decorated with a [DefaultSettingValueAttribute], these properties will always return a default value even when there is no setting present in the configuration file. If a setting that specifies another value exists, it will override the default one.

So, in other words, you can merge the dll's settings into the exe's config file, but you don't need to unless you want to give the user a way to override the default values, i.e. the ones specified via the [DefaultSettingValueAttribute] hard-coded into the assembly.

Now, you have been listening to me as I keep talking about merging the class library's applicationSettings group section.
How is that done exactly?
Where do I copy the settings?
Do I just grab the setting elements and stick them into the applicationSettings group section of the exe's configuration file?
I could not find a practical example on any forum and I just assumed that this is how it should be done. While developing my solution it was not obvious that I was wrong. The truth came out only when I deployed my application at the client: instead of using the newly configured values, the application was constantly defaulting back to the setting values I had specified at design time, at the last compilation.

So, let's say that I want to provide the user with a configuration file where he can change setting values and have them actually stick. I couldn't find anything specific on MSDN, so if anybody knows of any material, please let me know. What I am presenting in the next lines I discovered by trial and error.

Anyway, let us use a very simple, practical example. Let’s use VB.NET

1. Create a Class Library project called ClassLibrary.
2. Click the Show All Files button on the Solution Explorer's toolbar.
3. Expand MyProject and double click Settings.settings.
4. Add an application-scoped setting called Message whose value is “Hello!”.
5. Create a property in Class1.vb (the automatically added class) that exposes the setting:

[screenshot: the Message property added to Class1.vb]
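
The original screenshot is gone; the property probably looked something along these lines (my reconstruction, not the original code):

Public Class Class1
    Public ReadOnly Property Message() As String
        Get
            Return My.Settings.Message
        End Get
    End Property
End Class
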
6. Create a VB WinForms project and call it WinForm.
7. Add a reference to the ClassLibrary project.
8. Add a button to the already created Form1 and double click on it.
9. Add some code to the Button1_Click handler. It should look like this.

[screenshot: the Button1_Click handler in Form1]
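
Again, the screenshot is missing; a plausible version of the handler (control names are the designer defaults):

Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
    MessageBox.Show(New ClassLibrary.Class1().Message)
End Sub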

10. Have the WinForm project “Set as Startup project”

Now, while in the IDE everything works beautifully. Run the solution and you'll get the expected “Hello!” when you press the button. If you change the setting in the library's app.config to say “Good bye!” and run the solution again, you get “Good bye!”.

However, what we want to do is to simulate a run outside the development environment.

1. Right click on the WinForm project and choose “Open in Explorer”.
2. Get to the Debug folder. Note that there’s no WinForm.exe.config file yet. Let’s create one quickly.
3. Switch back to VS and, with the WinForm project selected, click Show All Files.
4. Expand MyProject, open Settings.settings, create a setting “A” with value “A”(doesn’t matter what) and save.

There we go, an App.config was created, and if I build the solution it gets copied to the Debug folder as WinForm.exe.config.

So far we have two configuration files in the output folders of each project:

ClassLibrary.dll.config
[screenshot: the ClassLibrary project's configuration file]
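
The screenshot did not survive; a file generated for an application-scoped Message setting typically looks like this (the assembly version numbers depend on the targeted framework):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
      <section name="ClassLibrary.My.MySettings" type="System.Configuration.ClientSettingsSection, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
    </sectionGroup>
  </configSections>
  <applicationSettings>
    <ClassLibrary.My.MySettings>
      <setting name="Message" serializeAs="String">
        <value>Hello!</value>
      </setting>
    </ClassLibrary.My.MySettings>
  </applicationSettings>
</configuration>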

and, in the WinForm project's output folder:

WinForm.exe.config
[screenshot: the WinForm project's configuration file]
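
Its reconstruction, with the throwaway “A” setting we just added:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
      <section name="WinForm.My.MySettings" type="System.Configuration.ClientSettingsSection, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
    </sectionGroup>
  </configSections>
  <applicationSettings>
    <WinForm.My.MySettings>
      <setting name="A" serializeAs="String">
        <value>A</value>
      </setting>
    </WinForm.My.MySettings>
  </applicationSettings>
</configuration>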

The question is: what do we merge from ClassLibrary.dll.config into WinForm.exe.config, and how? If we use the Settings designer (invoked by double clicking the Settings.settings file) to add the Message setting to the WinForm project, it will not work. Not even when run from the IDE.

WinForm/Settings.settings

[screenshot: the WinForm project's Settings designer with a Message setting added]

The message box will display the default value of the Message property – the one persisted in the ClassLibrary assembly.

First Method

However, if we modify the WinForm project's app.config by copying the section definition and then the whole section itself from the ClassLibrary project's app.config to their corresponding places in the WinForm project's app.config, and then compile, we obtain a WinForm.exe.config that should look like this.

WinForm.exe.config

[screenshot: WinForm.exe.config after the merge]
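
Since the screenshot is lost, here is roughly what the merged file ends up looking like; the value is whatever you want the deployed application to use:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
      <section name="ClassLibrary.My.MySettings" type="System.Configuration.ClientSettingsSection, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
    </sectionGroup>
  </configSections>
  <applicationSettings>
    <ClassLibrary.My.MySettings>
      <setting name="Message" serializeAs="String">
        <value>Good bye!</value>
      </setting>
    </ClassLibrary.My.MySettings>
  </applicationSettings>
</configuration>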

Notice that, since we did not have any relevant settings in the WinForm project itself, we got rid of its section definition and of the settings we added previously.

An unfortunate thing is that the Settings designer, when invoked again, will pick up this setting and import it, and it will save it back to the app.config under the <WinForm.My.MySettings> section if you answer Yes when asked to save the app.config changes. This does no harm unless there happens to be code in the WinForm assembly that uses a My.Settings property called Message. If it does annoy you, you will have to delete it manually and refrain from saving the changes the Settings designer wants to apply to app.config.

Second Method

If for various reasons you want to keep the configuration in two separate files – maybe you grew fond of the ClassLibrary.dll.config name – you can modify the WinForm project's app.config to look like:

WinForm.exe.config

[screenshot: WinForm.exe.config pointing to the external file]
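
The screenshot is missing here as well; my best guess, based on the standard configSource mechanism, is something like:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
      <section name="ClassLibrary.My.MySettings" type="System.Configuration.ClientSettingsSection, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
    </sectionGroup>
  </configSections>
  <applicationSettings>
    <!-- the section body lives in a separate file sitting next to the exe -->
    <ClassLibrary.My.MySettings configSource="ClassLibrary.dll.config" />
  </applicationSettings>
</configuration>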

and copy the ClassLibrary.dll.config from the output folder of the ClassLibrary project to the output folder of the WinForm project, after removing some parts so that it looks like this.

ClassLibrary.dll.config

[screenshot: the trimmed ClassLibrary.dll.config]
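
With configSource the external file has to contain the bare section element and nothing else, so the trimmed file would be something like (again, my reconstruction):

<?xml version="1.0" encoding="utf-8"?>
<ClassLibrary.My.MySettings>
  <setting name="Message" serializeAs="String">
    <value>Good bye!</value>
  </setting>
</ClassLibrary.My.MySettings>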

A retrieve request to CrmService or the pre/post image does not seem to contain all the entity attributes.

You are developing a plug-in for Microsoft CRM 4.0.

Long story short, the IPluginExecutionContext instance passed to the plugin code has an InputParameters property which might contain a “Target” property. This property may be your entity, so most common code examples will try to cast it to a DynamicEntity.

DynamicEntity entity = context.InputParameters.Properties["Target"] as DynamicEntity;

If you end up with a non-null value, you are probably in possession of your entity.

By default, the code that you write in the plugin has access, under the DynamicEntity.Properties collection, only to the attributes of the entity that just got created, updated or deleted.

To access all the attributes, even the ones that did not change, you can either spawn an instance of the CRM service by calling context.CreateCrmService() or register for a pre/post image.

Now, let us get to the problem that I encountered. Inspecting the entity.Properties items I noticed there were fewer of them, way fewer than the total number of attributes the entity/table has.
It took me a while until I realized that the DynamicEntity.Properties collection will not contain attributes whose values are null (Nothing), even when you explicitly requested all of the entity's attributes.
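
A small illustration of the defensive pattern this implies when reading a DynamicEntity in plugin code; the attribute name new_color is made up:

DynamicEntity entity = context.InputParameters.Properties["Target"] as DynamicEntity;
if (entity != null)
{
    // An attribute whose value is NULL in the database is simply absent from the collection,
    // so test for its presence instead of indexing into it blindly.
    string color = entity.Properties.Contains("new_color")
        ? (string)entity.Properties["new_color"]
        : null;
}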

This is something I had read in a book, so while I was trying to understand what was happening the answer was actually lurking in the recesses of my brain. It struck me when I inspected the database record of the entity I had just created.

Hopefully this will save you some time if you get here first. :D

Cheers.

Creating a linked server dynamically and executing a query against it.

Sometimes, from a SQL script running on server A, we need to execute a query against a database on a different SQL server B, or against any other kind of database. If this other database can normally be linked to your main SQL server A, then you can definitely do that at design time and solve your problem. The question is: can you do it at run time?

I mean, what if you do something in a SQL script on your main server A and you have to reach out and fetch some information from a database on another server, with both names passed as parameters or read from a config file?

The script below shows how to link a SQL database on another server dynamically at runtime and then issue a query against a table from that database.

DECLARE @v_ServerName sysname, @v_DataBaseName sysname, @qry varchar(500)
SET @v_ServerName = 'RemoteServer'      -- placeholder: the remote SQL Server's name
SET @v_DataBaseName = 'RemoteDatabase'  -- placeholder: the database on that server

-- Add the linked server only if it is not linked already
IF NOT EXISTS (SELECT 1 FROM sys.servers WHERE [name] = @v_ServerName)
    EXEC sp_addlinkedserver @server = @v_ServerName, @srvproduct = N'', @provider = N'SQLNCLI', @datasrc = @v_ServerName;

-- Build and run the distributed query; append a WHERE clause as needed
SET @qry = 'SELECT * FROM [' + @v_ServerName + '].[' + @v_DataBaseName + '].[dbo].[TableName]'

EXECUTE (@qry);

Try it and let me know if it worked for you.

Me, MyClass or MyBase

Me

According to MSDN the Me keyword provides a way to refer to the specific instance of a class or structure in which the code is currently executing. Me behaves like either an object variable or a structure variable referring to the current instance. Using Me is particularly useful for passing information about the currently executing instance of a class or structure to a procedure in another class, structure, or module.

Let's say that in Visual Basic .NET, when you create a form, you want to explicitly create the default constructor.
You may be probably doing this to add some extra-initialization to the constructor. Another reason could be to make it obvious that a constructor exists as otherwise a default constructor is provided implicitly and transparently.

Now when you just typed:

Public Sub New()

and pressed Enter, Visual Studio will automatically insert code by adding some comments and a call to

InitializeComponent()

This method is responsible for instantiating all the controls that you placed on the form at design time, and much more… so you want it called from any other constructor you may end up with. It would have been called automatically by the implicitly provided default constructor.

So, to recap, the default constructor that you just typed, looks like this:

Public Sub New()
    ' This call is required by the designer.
    InitializeComponent()
    ' Add any initialization after the InitializeComponent() call.
End Sub

Let's say we need a constructor that takes a parameter. To keep InitializeComponent() being called without having to call it explicitly, we call the default constructor from this new one:

Public Sub New(param As String)
    ' Calls the parameterless constructor, so InitializeComponent() gets called as well
    Me.New()
    txtParam.Text = param
End Sub

If we need another constructor that takes two parameters, we do the same, only this time we call the constructor that takes one parameter:

Public Sub New(param As String, param2 As String)
    ' Same thing; calling the previous constructor to preserve the initialization
    Me.New(param)
    txtParam2.Text = param2
End Sub

Again, we avoided duplicating the code that calls InitializeComponent() (and whatever else the previous parameterized constructor was executing) by simply invoking that constructor through the Me keyword.

What about MyClass? Well, with this one it is a bit more complicated.

According to MSDN the MyClass keyword behaves like an object variable referring to the current instance of a class as originally implemented. MyClass is similar to Me, but all method calls on it are treated as if the method were NotOverridable.

One thing for starters: you would use this keyword in the base class's code, not in the inheritor. Why? Because it offers a mechanism to write methods in that base class that circumvent polymorphism and guarantee the caller a certain original/base functionality even when called on a derived class instance.

Let’s say we have:

Class baseClass
    Public Sub testMethod()
        MsgBox("Base class string")
    End Sub
End Class

and we decide we need new functionality for testMethod but also need to preserve the baseClass functionality. We change testMethod in the base class to Overridable and redefine it in a derived class with Overrides (this gives you the polymorphic behaviour):

Class baseClass
    Public Overridable Sub testMethod()
        MsgBox("Base class string")
    End Sub
End Class

Class derivedClass : Inherits baseClass
    Public Overrides Sub testMethod()
        MsgBox("Derived class string")
    End Sub
End Class

Now, if we instantiate a derivedClass

Dim testObj As derivedClass = New derivedClass()
testObj.testMethod()

and call testMethod, we will see “Derived class string” being displayed. Even if we cast or declare the testObj instance to or as a baseClass, the overridden method will still execute. Even if you define another method on the base class that calls testMethod,

Class baseClass
    Public Overridable Sub testMethod()
        MsgBox("Base class string")
    End Sub

    Public Sub useTestMethod()
        ' The following call uses the calling class's version,
        ' even if that version is an override.
        Me.testMethod()
    End Sub
End Class

the testMethod that will execute will be the one on the derivedClass (as long as the instance is a derivedClass)

So, how can we get the original testMethod to execute on a derivedClass instance? Well, we can re-write the base class like this:

Class baseClass
    Public Overridable Sub testMethod()
        MsgBox("Base class string")
    End Sub

    Public Sub useBaseTestMethod()
        ' The following call uses this version and not any override.
        MyClass.testMethod()
    End Sub
End Class

and have the client call useBaseTestMethod instead of testMethod. The difference is that calling testMethod on a derivedClass instance gets you the derivedClass functionality, but calling useBaseTestMethod gets the baseClass's original testMethod to execute, purely because of the special way it is invoked: prefixed by the MyClass keyword.
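
To make the difference concrete, a short usage sketch (using the two classes above):

Dim testObj As New derivedClass()
testObj.testMethod()        ' displays "Derived class string" - the override wins
testObj.useBaseTestMethod() ' displays "Base class string" - MyClass pins the base version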

So, MyClass allows the designer of a class to say: I want this method, when it invokes certain other methods of this class, to always invoke whatever implementation (or portion of an implementation) I wrote for those in this class, and never even think of executing the overriding versions.

And MyBase?

Consider the previous example, and suppose you want the method in derivedClass to perform both the functionality from the baseClass and the new functionality from the derived class. Change testMethod in derivedClass to:

Class derivedClass
    Inherits baseClass

    Public Overrides Sub testMethod()
        MyBase.testMethod()
        MsgBox("Derived class string")
    End Sub
End Class

The addition of MyBase.testMethod() allows us to execute the baseClass functionality first, so we get both message boxes displayed. So, if we have the possibility of calling MyBase.testMethod to obtain the baseClass functionality, what good is MyClass then?

Let us not forget that what we were talking about was calling base functionality from the baseClass, and you cannot use MyBase in the baseClass. Moreover, MyClass is about enforcing, and assuring the consumer, that when he calls an inherited method M1 that in turn calls (using MyClass) another inherited method M2 that was meanwhile overridden, the base class code will always execute for M2. MyClass never invokes base class functionality but is merely used in the base class to set an execution path in stone, avoiding the situation where polymorphism would redirect the execution to the new version of M2 and thus change M1's expected functionality.

Cheers.