Debugging code called while NUnit testing.

This post is intended mostly as a reminder for myself.
This is probably a beaten path so I expect a lot of people already know how to do this if they need to.

Let’s say you are executing your NUnit tests using the NUnit console.
Something is happening and you want to be able to stop at a breakpoint somewhere in the code that you are testing.
So, while you have your NUnit console executable up and running, in Visual Studio go to Tools\Attach to Process ….
This will open a dialog where you can select the process you want to attach to. That process is "nunit-agent.exe"; after you select it, click Attach. (Leave all the other settings at their default values: Transport=Default, Qualifier=<your_computer_name>, Attach to=Automatic.)

We will assume the dialog was dismissed cleanly with no errors and you are back in the main VS window. At this point, if you have a breakpoint somewhere in the code under test, switch to the NUnit console and just Run (or Run All) the tests. Of course, the portion of the tested code with the breakpoint in it has to be called eventually (directly or indirectly) from the testing code. If that is the case, execution will now stop at that breakpoint, allowing you to go step by step, inspect the values of variables and so on.
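As a trivial illustration (the class and test names are made up for this post), a breakpoint set inside Calculator.Add would be hit as soon as the test is run from the NUnit console with the debugger attached:

using NUnit.Framework;

public class Calculator
{
    // Set a breakpoint on the next line; it will be hit once the debugger is attached to nunit-agent.exe.
    public int Add(int a, int b) { return a + b; }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSum()
    {
        Assert.AreEqual(5, new Calculator().Add(2, 3));
    }
}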

Hoping this helps,
Cheers

Unable to access computer B on a Windows network from computer A.

\\myNAS is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. Access is denied.

I ran into this nightmare of a problem last Friday. Honestly I don't know what happened, but all of a sudden, from the workstation, I could not access my Synology NAS anymore, neither by name nor by fully qualified domain name. By IP address it was working fine, but you don't want that approach to resolve the problem unless you are desperate.

It took Saturday, Sunday and Monday of on-and-off grinding at the issue to finally realize what the problem was. I have to mention that from other machines I could access the NAS just fine. Moreover, I could access other hosts on my network from my workstation with no problems. This was strictly an issue between my station and my NAS.

After numerous Wireshark traces and much struggling to discern which records in those traces represented the failed attempts to access my NAS, I started to believe more and more that this record was the one pointing out the problem: Tree Connect AndX Response, Error: STATUS_ACCESS_DENIED.

A simple Google search takes me to this blog entry. Even though it is only similar to my problem, it made me remember a Kerberos error that I had seen in the Wireshark traces, in a reply from my DC: KRB5 KRB Error: KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN. I should mention that my NAS is part of a domain and it uses AD authentication.

Naturally, as the next step, I went searching for the request related to the above reply: KRB5 AS-REQ. So, I expanded the Kerberos section of the request in Wireshark to see the name of the principal that my DC does not recognize. I was happy to see this: Client Name (Principal): admin, Realm: MYDOMAIN. I know for sure I don't have such a user in MYDOMAIN, so where is this coming from and why is it trying to authenticate using this specific account?

Thanks to my memory, which is still in decent shape, I remembered seeing, while analysing the Event Logs, a very isolated warning in the System event log that was occurring during the Windows boot procedure. This is the warning's message:

The password stored in Credential Manager is invalid. This might be caused by the user changing the password from this computer or a different computer. To resolve this error, open Credential Manager in Control Panel, and reenter the password for the credential MYDOMAIN\admin.

This particular error was a confirmation that somewhere, Windows was storing the wrong credentials, to be used automatically when accessing my NAS.

What could be clearer than that? Kudos to whoever was thoughtful enough to write that error message. I immediately opened Control Panel, typed Credential Manager in the Search box and clicked the Credential Manager link that showed up. It took me two seconds to spot the entry for my NAS' name in the Windows Credentials section. I deleted the entry, went back to Windows Explorer, tried both \\myNAS and \\myNAS.mydomain.local, and they worked!
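For what it's worth, the same stored credential can also be listed and removed from a command prompt with the built-in cmdkey tool (the target name below assumes the entry was stored under the NAS' short name):

cmdkey /list
cmdkey /delete:myNAS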

Cheers.

Exe, Dll assemblies and their config file applicationSettings group section.

The background

I have been reading articles and forum posts about applicationSettings for almost a week now.

In almost every thread there is someone who correctly points out that, once deployed, class libraries cannot have config files of their own the way executables do. You can always put a ClassLibrary.dll.config file there, but it won't make any difference if you just copy it from the output folder of the class library project into the folder where the application was deployed.

The application's code will not be able to read any settings from that file (unless some modifications are made to both the exe's and the dll's config files).

The same people say that, to use those settings that you happened to create at design time for the class library, you have to somehow merge its applicationSettings group section into the deployed application.exe.config, the configuration file of the application that hosts/consumes the dll. I have yet to see a clear example of how to do it.

However, you can access the class library's settings you had configured at the last compilation without merging its applicationSettings group section into the executable's config file. All the settings created in the class library at design time exist as properties of the Settings class from the My namespace. Because they are decorated with a [DefaultSettingValueAttribute], these properties will always return a default value even when there is no setting present in the configuration file. If a setting that specifies another value does exist in the configuration file, it overrides the default one.
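To see why, this is roughly what the Settings designer generates in Settings.Designer.vb for an application-scoped setting (a sketch, not the exact generated code; here the setting is called Message with the default value "Hello!"):

<Global.System.Configuration.ApplicationScopedSettingAttribute(), _
 Global.System.Configuration.DefaultSettingValueAttribute("Hello!")> _
Public ReadOnly Property Message() As String
    Get
        Return CType(Me("Message"), String)
    End Get
End Property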

So, in other words, you can merge the dll's settings into the exe's config file, but you don't need to, unless you want to provide the user with a way to override the default values, the ones specified through the [DefaultSettingValueAttribute] hard-coded in the assembly.

Now, you keep hearing me talk about merging the class library's applicationSettings group section.
How is that done exactly?
Where do I copy the settings?
Do I just grab the setting elements and stick them into the applicationSettings group section of the exe's configuration file?
I could not find a practical example on any forum, so I just assumed that this is how it should be done. While developing my solution it was not obvious that I was wrong. The truth came out only when I deployed the application at the client: instead of using the newly configured values, the application kept defaulting back to the setting values I had specified at design time, at the last compilation.

So, let's say that I want to provide the user with a configuration file where he could change setting values and have them actually stick. I couldn't find anything specific on MSDN, so if anybody knows of any material, please let me know. What I present in the next lines I discovered by trial and error.

Anyway, let us use a very simple, practical example. Let's use VB.NET.

1. Create a Class Library project called ClassLibrary.
2. In the Solution Explorer's toolbar, click the Show All Files button.
3. Expand MyProject and double click Settings.settings.
4. Add a setting called Message, application scoped whose value is "Hello!".
5. Create a property in Class1.vb (the automatically added class)

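The original post showed a screenshot here; a minimal sketch of what that property could look like, assuming it simply exposes the library's Message setting:

Public Class Class1
    ' Expose the class library's Message setting to consumers of the library.
    Public ReadOnly Property Message() As String
        Get
            Return My.Settings.Message
        End Get
    End Property
End Class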
6. Create a VB WinForms project and call it WinForm.
7. Add a reference to the ClassLibrary project.
8. Add a button to the already created Form1 and double click on it.
9. Add some code to the Button1_Click handler. It should look like this.

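Again a screenshot in the original; a sketch of the handler, assuming it just displays the setting exposed by the library:

Private Sub Button1_Click(ByVal sender As Object, ByVal e As EventArgs) Handles Button1.Click
    ' Display the setting value coming from the class library.
    Dim lib As New ClassLibrary.Class1()
    MessageBox.Show(lib.Message)
End Sub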

10. Have the WinForm project "Set as Startup project"

Now, while in the IDE everything works beautifully. Run the solution and you'll get the expected "Hello!" when you press the button. If you go and change the setting in the app.config of the library to say "Good bye!" and you run the solution again, you get "Good bye!".

However, what we want to do is to simulate a run outside the development environment.

1. Right click on the WinForm project and choose "Open in Explorer".
2. Get to the Debug folder. Note that there's no WinForm.exe.config file yet. Let's create one quickly.
3. Switch back to VS and, while the WinForm project is selected, click Show All Files.
4. Expand MyProject, open Settings.settings, create a setting "A" with value "A" (it doesn't matter what) and save.

There we go: an App.config was created, and if I build the solution it gets copied to the Debug folder as WinForm.exe.config.

So far we have two configuration files in the output folders of each project:

ClassLibrary.dll.config
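The original showed the file as a screenshot; given the Message setting created above, the generated file looks roughly like this (the Version token in the type attributes depends on the target framework; 2.0.0.0 is what .NET 2.0-3.5 projects generate):

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <configSections>
        <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
            <section name="ClassLibrary.My.MySettings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
        </sectionGroup>
    </configSections>
    <applicationSettings>
        <ClassLibrary.My.MySettings>
            <setting name="Message" serializeAs="String">
                <value>Hello!</value>
            </setting>
        </ClassLibrary.My.MySettings>
    </applicationSettings>
</configuration>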


WinForm.exe.config
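Its structure mirrors the one above, only with the WinForm project's own section and the throw-away A setting:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <configSections>
        <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
            <section name="WinForm.My.MySettings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
        </sectionGroup>
    </configSections>
    <applicationSettings>
        <WinForm.My.MySettings>
            <setting name="A" serializeAs="String">
                <value>A</value>
            </setting>
        </WinForm.My.MySettings>
    </applicationSettings>
</configuration>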
   
The question is: what exactly do we merge from ClassLibrary.dll.config into WinForm.exe.config, and how? If we use the Settings designer (invoked by double clicking the Settings.settings file) to add the Message setting to WinForm.exe.config, it will not work. Not even when run from the IDE.

WinForm/Settings.settings

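The screenshot showed the Message setting added in the WinForm project's Settings designer. All that does is add a second entry under WinForm's own section, roughly:

<WinForm.My.MySettings>
    <setting name="A" serializeAs="String">
        <value>A</value>
    </setting>
    <setting name="Message" serializeAs="String">
        <value>Hello!</value>
    </setting>
</WinForm.My.MySettings>

The class library's Settings class reads the ClassLibrary.My.MySettings section, not this one, which is why the change has no effect.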

The message box will display the default value of the Message property – the one persisted in the ClassLibrary assembly.

First Method

However, if we modify the WinForm project's app.config by copying the section definition and then the whole section itself from the ClassLibrary project's app.config into their corresponding places in the WinForm project's app.config, and then compile, we obtain a WinForm.exe.config that should look like this.

WinForm.exe.config

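Reconstructed from the description (the original was a screenshot), the merged file would be roughly:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <configSections>
        <!-- section definition copied from the ClassLibrary project's app.config -->
        <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
            <section name="ClassLibrary.My.MySettings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
        </sectionGroup>
    </configSections>
    <applicationSettings>
        <!-- whole section copied from the ClassLibrary project's app.config; edit the value here after deployment -->
        <ClassLibrary.My.MySettings>
            <setting name="Message" serializeAs="String">
                <value>Good bye!</value>
            </setting>
        </ClassLibrary.My.MySettings>
    </applicationSettings>
</configuration>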

Notice that, since we did not have any relevant settings in the WinForm project itself, we got rid of its section definition and of the settings we added previously.

An unfortunate thing is that the Settings designer, when invoked again, will pick this setting up and import it, but it will save it back to the app.config in the <WinForm.My.MySettings> section if you answer yes when asked whether to save the app.config changes. This does no harm unless there is somehow code in the WinForm assembly that uses a property in My.Settings called Message. If it does annoy you, you will have to delete it manually and refrain from saving the changes that the Settings designer might want to apply to app.config.

Second Method

If for various reasons you want to keep the configuration in two separate files, maybe you grew fond of the ClassLibrary.dll.config name, you can modify the WinForm project's app.config to look like:

WinForm.exe.config

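I cannot reproduce the exact screenshot, but one wiring that achieves the described setup uses the configSource attribute, which tells the configuration system to read the section's body from an external file deployed next to the exe:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <configSections>
        <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
            <section name="ClassLibrary.My.MySettings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
        </sectionGroup>
    </configSections>
    <applicationSettings>
        <!-- the section body lives in the separate file below -->
        <ClassLibrary.My.MySettings configSource="ClassLibrary.dll.config" />
    </applicationSettings>
</configuration>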

and copy ClassLibrary.dll.config from the output folder of the ClassLibrary project to the output folder of the WinForm project, after removing some parts so that it looks like this.

ClassLibrary.dll.config

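With configSource, the external file must contain nothing but the section element itself, so the trimmed-down file would be roughly:

<ClassLibrary.My.MySettings>
    <setting name="Message" serializeAs="String">
        <value>Good bye!</value>
    </setting>
</ClassLibrary.My.MySettings>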

A retrieve request to CrmService or the pre/post image does not seem to contain all the entity attributes.

You are developing a plug-in for Microsoft CRM 4.0.

Long story short, the IPluginExecutionContext instance passed to the plug-in code has an InputParameters property which might contain a "Target" property. This property may be your entity, so most common code examples will try to cast it to a DynamicEntity.

DynamicEntity entity = context.InputParameters.Properties["Target"] as DynamicEntity;

If you end up with a non-null value, you are probably in possession of your entity.

By default, the code that you write in the plug-in will only have access, through the DynamicEntity.Properties collection, to the attributes of the entity that were involved in the create, update or delete that just happened.

To access all the attributes, even the ones that did not change, you can either spawn a CRM service instance by calling context.CreateCrmService or register for a pre/post image.

Now, let us get to the problem that I encountered. Inspecting the entity.Properties items, I noticed that there were fewer of them, way fewer, than the total number of attributes the entity/table has.
It took me a while until I realized that the DynamicEntity.Properties collection will not contain attributes that have null (or Nothing) values, even when you explicitly requested the entity.
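In practice this simply means the plug-in code cannot assume a given key is present; something along these lines (the attribute name is made up):

// Null-valued attributes are simply absent from the collection, so check before reading.
// "new_somefield" is a hypothetical attribute name.
string someValue = null;
if (entity.Properties.Contains("new_somefield"))
{
    someValue = (string)entity.Properties["new_somefield"];
}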

This is something I had read in a book, so while I was trying to understand what was happening the answer was actually lurking in the recesses of my brain. When I inspected the database record of the entity I had just created, it struck me.

Hopefully this will save you some time if you get here first. :D

Cheers.

Creating a linked server dynamically and executing a query against it.

Sometimes, from a SQL script running on server A, we need to execute a query against a database on a different SQL server B, or against any other kind of database. If this other database can normally be linked to your main SQL server A, then you can definitely do that at design time and solve your problem. The question is: can you do it at run time?

I mean, what if you do something in a SQL script on your main server A and you have to reach out and fetch some information from another database on a server whose names are passed as parameters or read from a config file?

The script below shows how to link a SQL Server database on another server dynamically at run time and then issue a query against a table from that database.

DECLARE @v_ServerName varchar(100), @v_DataBaseName varchar(50)
DECLARE @qry varchar(500)

-- These would normally be passed in as parameters or read from configuration;
-- the values below are just placeholders.
SET @v_ServerName = 'REMOTESERVER'
SET @v_DataBaseName = 'RemoteDb'

-- Add the linked server only if it is not already registered.
IF NOT EXISTS (SELECT [SERVER_ID] FROM sys.servers WHERE [Name] = @v_ServerName)
    EXEC sp_addlinkedserver @v_ServerName, N'Any', N'SQLNCLI', @v_ServerName;

SET @qry = 'SELECT * FROM [' + @v_ServerName + '].[' + @v_DataBaseName + '].[dbo].[TableName]'

EXECUTE (@qry);

Try it and let me know if it worked for you.

Me, MyClass or MyBase

Me

According to MSDN the Me keyword provides a way to refer to the specific instance of a class or structure in which the code is currently executing. Me behaves like either an object variable or a structure variable referring to the current instance. Using Me is particularly useful for passing information about the currently executing instance of a class or structure to a procedure in another class, structure, or module.

Let's say that in Visual Basic .NET, when you create a form, you want to explicitly create the default constructor.
You are probably doing this to add some extra initialization to the constructor. Another reason could be to make it obvious that a constructor exists, as otherwise a default constructor is provided implicitly and transparently.

Now when you just typed:

Public Sub New()

and pressed Enter, Visual Studio will automatically insert code by adding some comments and a call to

InitializeComponent()

This method is responsible for instantiating all the controls that you placed on the form at design time, and many other things, so you want it called from any other constructors you may end up with. It would have been called automatically by the implicitly provided default constructor.

So, to recap, the default constructor that you just typed, looks like this:

Public Sub New()
    ' This call is required by the designer.
    InitializeComponent()
    ' Add any initialization after the InitializeComponent() call.
End Sub

Let's say we need a constructor that takes a parameter. To keep InitializeComponent() being called, without having to call it explicitly, we call the default constructor from this new one:

Public Sub New(param As String)
    ' Calls the parameterless constructor, so InitializeComponent gets called as well.
    Me.New()
    txtParam.Text = param
End Sub

If we need another constructor that takes two parameters, we do the same, only this time we call the constructor that takes one parameter:

Public Sub New(param As String, param2 As String)
    ' Same thing; calling the previous constructor to preserve the initialization.
    Me.New(param)
    txtParam2.Text = param2
End Sub

Again, we avoided duplicating the code that calls the InitializeComponent and some other code the previous parameterized constructor was executing by simply invoking that constructor through the use of the Me keyword.

What about MyClass? Well, with this one it is a bit more complicated.

According to MSDN the MyClass keyword behaves like an object variable referring to the current instance of a class as originally implemented. MyClass is similar to Me, but all method calls on it are treated as if the method were NotOverridable.

One thing, for starters, is that you would use this keyword in the base class' code, not in the inheritor. Why? Because it offers you a mechanism to write methods in that base class that circumvent polymorphism and guarantee a client a certain original/base functionality even when called on a derived class instance.

Let’s say we have:

Class baseClass
    Public Sub testMethod()
        MsgBox("Base class string")
    End Sub
End Class

and we decided we need new functionality for testMethod but we need to preserve the baseClass functionality. We change the testMethod in the base class to Overridable and redefine it in a derived class with Overrides (this will give you the polymorphic behaviour):

Class baseClass
    Public Overridable Sub testMethod()
        MsgBox("Base class string")
    End Sub
End Class

Class derivedClass : Inherits baseClass
    Public Overrides Sub testMethod()
        MsgBox("Derived class string")
    End Sub
End Class

Now, if we instantiate a derivedClass

Dim testObj As derivedClass = New derivedClass()
testObj.testMethod()

and we call testMethod, we will see "Derived class string" being displayed. Even if we cast or declare the testObj instance to or as a baseClass, the overridden method will still execute. Even if you define another method on the base class that calls testMethod,

Class baseClass
    Public Overridable Sub testMethod()
        MsgBox("Base class string")
    End Sub

    Public Sub useTestMethod()
        ' The following call uses the calling class's version,
        ' even if that version is an override.
        Me.testMethod()
    End Sub
End Class

the testMethod that executes will be the one on the derivedClass (as long as the instance is a derivedClass).

So, how can we get the original testMethod to execute on a derivedClass instance? Well, we can try to re-write the base class like this:

Class baseClass
    Public Overridable Sub testMethod()
        MsgBox("Base class string")
    End Sub

    Public Sub useBaseTestMethod()
        ' The following call uses this version and not any override.
        MyClass.testMethod()
    End Sub
End Class

and have the client call useBaseTestMethod instead of testMethod. The difference is that calling testMethod on a derivedClass instance gives you the derivedClass functionality, but calling useBaseTestMethod executes the baseClass' original testMethod, solely because of the special way it is invoked: prefixed by the MyClass keyword.
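To make the contrast concrete, a short usage sketch:

Dim testObj As New derivedClass()
testObj.testMethod()          ' displays "Derived class string" (the override)
testObj.useBaseTestMethod()   ' displays "Base class string", thanks to MyClass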

So, MyClass allows the designer of a class to say: when this method invokes certain other methods of this class, I want it to always run whatever implementation (or portion of implementation) I wrote for those in this class, and never even consider executing the overridden versions.

And MyBase?

Considering the previous example, suppose you want a method in the derivedClass that performs both the functionality from the baseClass and the new one from the derived class. Change the testMethod of the derivedClass to:

Class derivedClass
    Inherits baseClass

    Public Overrides Sub testMethod()
        MyBase.testMethod()
        MsgBox("Derived class string")
    End Sub
End Class

The addition of the MyBase.testMethod() allows us to execute the baseClass functionality first so we get both message boxes displayed. So, if we have the possibility of calling MyBase.testMethod to obtain baseClass functionality, what good is MyClass then?

Let us not forget that what we were talking about was calling base functionality from the baseClass, and you cannot use MyBase for that in the baseClass. Moreover, MyClass is about enforcing and assuring the consumer that when he calls an inherited method M1 that in turn calls (using MyClass) another inherited method M2 that was in the meantime overridden, the base class code always executes for M2. MyClass never invokes base class functionality; it is merely used in the base class to set an execution path in stone, avoiding the situation where polymorphism would redirect execution to the new version of M2, thus changing M1's expected functionality.

Cheers.

Silverlight and RIA services – overriding an attribute set in a base class.

Recently I found myself in the following situation. I had, let's say, a class DerivedClass inheriting from a BaseClass on the server side of things.

One of the properties of the BaseClass has the [Required] attribute applied.

public class BaseClass
{
    [Required]
    public string Name { get; set; }
}

It just so happened that in my DerivedClass I needed some more validation to be done, so I wanted to go for a [CustomValidation] attribute with its own custom validation method. Moreover, I did not need the constraint of the [Required] attribute anymore (if you remember, the [Required] attribute, by default, implies the property should not have an empty or null value), so I wanted to revert its effect.

You can write the following code:

using System.ComponentModel.DataAnnotations;

[MetadataType(typeof(DerivedClassMetaData))]
public class DerivedClass : BaseClass
{
}

public class DerivedClassMetaData
{
    [Required(AllowEmptyStrings = true)]
    [CustomValidation(typeof(SomeValidationClass), "ValidationMethod")]
    public string Name { get; set; }
}

public class SomeValidationClass
{
    public static ValidationResult ValidationMethod(object value, ValidationContext validationContext)
    {
        // Custom validation logic goes here.
        return ValidationResult.Success;
    }
}

The most important thing in the code above is the [MetadataType] attribute, which allows me to attach a metadata type to my derived class and readjust some of its inherited attributes. One thing I did was to redefine the [Required] attribute to undo the effect of the similar definition on the BaseClass. Secondly, I attached a custom validation to the Name property.
A very good resource on custom validation is here.

On the client side, in the generated code, the Name property looks like this:

[System.ComponentModel.DataAnnotations.CustomValidationAttribute(typeof(SomeValidationClass), @"ValidationMethod")]
[System.ComponentModel.DataAnnotations.Required(AllowEmptyStrings = true)]
public string Name {

}

The whole SomeValidationClass code is brought over from the business side and included in the generated code so the CustomValidationAttribute can find its arguments.

Cheers.

How to become a sysadmin on a SQLEXPRESS 2008 installation when you are not the original installer, SQL authentication is not enabled but you are a Windows administrator.

I have gotten a new job and I am in that phase where you slowly start to set up your machine, the environment, etc.

One of the things I had to do was to set up the database for the project my team works on. The database has to be configured locally on a 2008 R2 SQLEXPRESS instance (already installed by a system administrator and, obviously, under a different Windows account).

Long story short, when I attempted to create a database I got access denied. Any other attempt to gain administrative rights over the database server failed. Did I mention the account I was logged on with is part of the local Administrators group? Yes, that too, and even though I launched the Management Studio in admin mode as well, it did not help. It seemed as if the database server did not really care that I am a mighty administrator on my machine.

Reading some articles out there, I learnt that MS SQL 2008 does not include the Windows administrators in the, by now, very select and limited group of sysadmins. It certainly seems true, at least in my case. This means that if the person who installs the database server does not specifically make you a sysadmin, you will not automatically be a fully privileged user over the database server just because you are a Windows admin.

There is a way you can make yourself a sysadmin in a situation like mine where somebody else installed the server, left you out and you do not have access to the original installer.

You have to launch the database server in "single-user" mode, which is when your Windows administrator account can act as a sysadmin; you can then add it as a login to the database server and make it part of the sysadmin role.

  • Start by stopping the SQL server and closing the Management Studio.
  • Launch a command prompt as an Administrator.
  • Then launch the SQL Server Configuration Manager; select SQL Server services; right click on the SQL Server (SQLEXPRESS) service and click Properties.
  • Select the Service tab and double click on Binary Path; you should get a drop down containing the command that launches that specific SQL instance:
  • (screenshot of the service Properties dialog in the original post)
  • Copy it and paste it into the previously opened command prompt window; add an extra -m to the parameters: "c:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\Binn\sqlservr.exe" -m -sSQLEXPRESS;
  • Press Enter and it will look like Linux is about to start. When it stops and one of the last lines logged to the window says "SQL Server is now ready for client connections", it means you succeeded and your "single-user" instance is running.
  • Launch the SQL Server Management Studio as an Administrator and do whatever you want, because you are now a sysadmin; basically you would want to add your Windows account as a SQL login and add the sysadmin server role to this login (see the T-SQL sketch after this list);
  • When you are done with it, return to the command prompt window, press Ctrl+C and respond Y to shut down the SQL service.
  • Go back to the Management Studio and start your instance the usual way and test. You should be able to do what you please.
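For reference, the statements I would run in that elevated Management Studio session look roughly like this (the domain and user names are placeholders, replace them with your own account):

-- Hypothetical Windows account; use your own DOMAIN\user.
CREATE LOGIN [MYDOMAIN\myuser] FROM WINDOWS;
-- On SQL Server 2008 the role membership is added with sp_addsrvrolemember.
EXEC sp_addsrvrolemember 'MYDOMAIN\myuser', 'sysadmin';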

Hope this helps.

Cheers.

System.Data.ConstraintException: Failed to enable constraints.

(continued from the title) One or more rows contain values violating non-null, unique, or foreign-key constraints.

I was working on a small project and I used the DataSet designer to quickly obtain an adapter and a strongly typed table.
I parameterized the table adapter so I could provide a different connection string and a different table for the SELECT query.

As always when you use the designer to initially generate the adapter and data table you have to point it to a table.
The designer then inspects the table and builds all the good things: table adapter, strongly typed data table and row, and so on.
The problem is that it hardcodes the connection string and the name of the table in the SELECT command and I would really like to re-use the generated code to obtain data from other tables too. So, I parameterized the adapter to accept a different table knowing well enough that the other tables have to have identical schema/structure.
I guess otherwise, the adapter.GetData() or adapter.Fill() methods will fail, probably with the above message – the one in the title.

If you read the error message in the title it doesn’t give you the impression it would be thrown for a schema discrepancy. Oh, but it does and that’s what confuses you as you don’t suspect a schema difference to be the problem! And we are not necessarily talking of a big discrepancy like missing a field or even a field of a different type.
No, the one that I encountered was caused by a different width of the field. The autogenerated MaxLength for one of my fields (10) was smaller than the width of the same field in a second table that I tried to use the adapter on. The field in the second table was 30 characters wide. This is what caused the exception to be thrown.

So, to get into some details I will start by saying that I am trying to access dbf tables somewhere in a folder. I am using the Visual FoxPro OLEDB driver which has the ability to treat a folder as a database and the dbf files in the folder as tables of the database. The Visual FoxPro ODBC driver does the same but needs a preconfigured entry in the ODBC connections of the system whereas with the OLEDB you can have the connection string built at run-time.

Anyway, with the MyTableAdapter and FileDataTable classes already generated, I wrote the few lines of code below to test that the adapter and my parameterizations work well enough to fill data from more than the initial table I generated the classes from.

MyTableAdapter sta = new MyTableAdapter();
Test.FileDataTable sdt;
// setting the new connection for the adapter
sta.SetConnectionString(new OleDbConnection("Provider=VFPOLEDB.1;Data Source=" + @"\\TESTSERVER\c$\FOLDER"));
// setting the table we will use the adapter against
sta.SetTable("FILE1.DBF");
try
{
    // attempt to fill the DataTable
    sdt = sta.GetData();
}
catch (Exception ex)
{
    System.Diagnostics.Debug.WriteLine(ex.Message);
    throw;
}

Now, this code works well when trying to fetch the records from FILE1.DBF (the original file) but when I tried to do the same from a structurally identical (almost) FILE2.DBF an exception was thrown: System.Data.ConstraintException: Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.

This message is not helping at all in finding precisely what the problem is; moreover, there were really no constraints to be concerned with, and all the columns of the FileDataTable were defined to support nulls. Anyway, in an effort to get some more information I did a bit of digging and was led to the DataTable.GetErrors() method. I thought it was worth a try as I had nothing else to go on anyway.

So, I made the first modification to the code above by adding a DataRow[] dr = sdt.GetErrors(); line in the catch clause. That really didn't work, as the line was causing an exception complaining that the sdt variable is null. Pretty obvious, I said: "I am trying to call GetErrors on a table that doesn't exist, as it failed to be instantiated because of an exception." So, what I had to do was instantiate an empty DataTable manually ahead of time and then, in the try block, use adapter.Fill(dataTable) instead of the GetData() syntax. This way, by the time we call Fill(dataTable), dataTable is already an instantiated object which we can call GetErrors() on and obtain our array containing the DataRows with the problem.

MyTableAdapter sta = new MyTableAdapter();
Test.FileDataTable sdt = new Test.FileDataTable();
// setting the new connection for the adapter
sta.SetConnectionString(new OleDbConnection("Provider=VFPOLEDB.1;Data Source=" + @"\\TESTSERVER\c$\FOLDER"));
// setting the table we will use the adapter against
sta.SetTable("FILE2.DBF");
try
{
    // attempt to fill the DataTable
    sta.Fill(sdt);
}
catch (Exception ex)
{
    // get all the rows involved in errors
    DataRow[] dr = sdt.GetErrors();
    throw;
}

By expanding the first row in the now populated dr array and looking at the RowError property, I can see: "Column 'prod' exceeds the MaxLength limit."
So this is when I checked the table and saw that the prod field was 30 characters wide, clearly more than the 10-character MaxLength specified in the FileDataTable definition.
Apparently, out of two tables which I believed to be identical, the first one had a field narrower than the second.

If you want to make this go away you could edit the FileDataTable’s problem field definition by modifying its MaxLength to be roomy enough.
Another way is to disable the enforcement of constraints at run-time by setting DataSet.EnforceConstraints = false, with the downside that the data in wider fields will be truncated and lost.
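If you prefer the first option but don't want to touch the designer, the same widening can presumably be done at run time, before calling Fill (the column name below is the one from my case):

// Widen the typed column before the Fill so longer values no longer violate MaxLength.
sdt.Columns["prod"].MaxLength = 254;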

The last code addition accommodates this second approach. As data tables don't have their own EnforceConstraints property, we have to instantiate a DataSet. We then add the table to it and set EnforceConstraints = false. This causes the discrepancy between the field width in the data set definition and the width of the field in the physical table to be ignored. The code will not error out anymore, but the value of the field will be truncated.

The final code could look something like this:

DataSet ds = new DataSet("Test");
MyTableAdapter sta = new MyTableAdapter();
Test.FileDataTable sdt = new Test.FileDataTable();
ds.Tables.Add(sdt);
// turn constraint enforcement off so the width discrepancy is ignored
ds.EnforceConstraints = false;
// setting the new connection for the adapter
sta.SetConnectionString(new OleDbConnection("Provider=VFPOLEDB.1;Data Source=" + @"\\TESTSERVER\c$\FOLDER"));
// setting the table we will use the adapter against
sta.SetTable("FILE2.DBF");
try
{
    // attempt to fill the DataTable
    sta.Fill(sdt);
}
catch (Exception ex)
{
    // get all the rows involved in errors
    DataRow[] dr = sdt.GetErrors();
    throw;
}

Cheers.

Talking about TFS 2010 Dashboard Permissions when running in SharePoint Server 2010 – Layers & Layers of Security

This is a very interesting and revealing article about one of the many very mysterious sides of the integration between Team Foundation Server and SharePoint. It is not exhaustive but, for the amount of brand new information it contains, I surely do not regret the 10 minutes I spent reading it.

TFS 2010 Dashboard Permissions when running in SharePoint Server 2010 – Layers & Layers of Security

Good luck
