Monday, October 25, 2010

Forms Authentication against Active Directory

While trying to authenticate against AD I ran into some problems and wondered whether my LDAP connection string was correct.  I found a great way to verify it with the Group Policy Results tool (http://www.microsoft.com/windowsxp/using/setup/expert/gpresults.mspx):
gpresult.exe /r
It lists all sorts of information about the currently logged-in user - look for the "User Settings" section, where the first line shows you the exact path to your entry.  In my case:

CN=John Hennesey,CN=Users,DC=domain,DC=com

And there is an entry "Group Policy was applied from" - there's your server.domain.com.

And here are the web.config entries (no magic here - this is on thousands of web pages - included for easy reference in the future):


<connectionStrings>
  <add name="ADConnectionString" connectionString="LDAP://server.domain.com/CN=Users,DC=domain,DC=com"/>
</connectionStrings>

<authentication mode="Forms">
  <forms name=".ADAuthCookie" timeout="10"/>
</authentication>
<authorization>
  <deny users="?"/>
  <allow users="*"/>
</authorization>
<membership defaultProvider="MyADMembershipProvider">
  <providers>
    <add name="MyADMembershipProvider"
         type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
         connectionStringName="ADConnectionString"
         attributeMapUsername="sAMAccountName"/>
  </providers>
</membership>


Couple this with the ASP.NET Login control, and you're all set.  Easy!

Friday, October 22, 2010

Nolocks on a view - does it nolock all the tables in the view?

I received this question from a colleague this week:

"If there is a view that is not using nolock [in the from clause] but you have a stored procedure that uses nolock when it calls the view then do all the FROMs in the view run with nolock?"



The simple answer: Yes, the NOLOCK hint on the view should propagate down to the tables queried in the view*.


A brief visit to our favorite resource, Books Online, reveals the following:
In SQL Server 2005/2008, all lock hints are propagated to all the tables and views that are referenced in a view. Also, SQL Server performs the corresponding lock consistency checks.

 And the asterisk:
*If a table contains computed columns and the computed columns are computed by expressions or functions accessing columns in other tables, the table hints are not used on those tables. This means the table hints are not propagated. For example, a NOLOCK table hint is specified on a table in the query. This table has computed columns that are computed by a combination of expressions and functions that access columns in another table. The tables referenced by the expressions and functions do not use the NOLOCK table hint when accessed.

And my (rather simple) test proved it out:

use master
go

create database lockTest
go


use lockTest
go

create table t1 (f1 int, f2 int)
go

insert into t1 values (1, 1)
insert into t1 values (2, 2)

go


create view v1 as
select *
from t1
go

set transaction isolation level serializable
go

begin tran
select * from v1

insert into v1 values (3,3)

-- go into a new window and run the following:
/*
-- Works
select *
from v1 with (nolock)

-- Hangs
select *
from v1
*/
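
-- Optional: from a third window you can also watch the locks the open
-- transaction is holding (sys.dm_tran_locks, SQL Server 2005+):
select resource_type, request_mode, request_status, request_session_id
from sys.dm_tran_locks
where resource_database_id = db_id('lockTest')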



/*
-- cleanup
use master
go

drop database lockTest
go
*/



SQL - search for column comments

Technical reference - an easy way to query the system tables to search for column comments.

select *
from sys.extended_properties
where class = 1  -- 1 = object or column
  and major_id = object_id(N'<<schema>>.<<table>>')
  and name = 'MS_Description'
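
If you don't know which table a comment lives on, a variant like this will search the whole database for a keyword (a rough sketch; the sys.objects/sys.columns joins and the keyword placeholder are my own additions):

select o.name   as table_name,
       c.name   as column_name,
       ep.value as comment
from sys.extended_properties ep
     join sys.objects o
       on o.object_id = ep.major_id
     left join sys.columns c
       on c.object_id = ep.major_id
      and c.column_id = ep.minor_id
where ep.class = 1
  and ep.name = 'MS_Description'
  and cast(ep.value as nvarchar(max)) like N'%keyword%'
order by o.name, c.name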

Reporting Services 2008 over SSL - Certificates

(If you are trying to run RS over SSL and want to cut to the chase, skip the next couple of paragraphs of frustration and jump to the numbered steps below.)

Disclaimer – I am not that savvy with administration; I'm better at the nuts and bolts of programming, so most of this was a learning experience.

Today I needed to experiment with the LogonUser method of the web service – the only way to call it is over SSL.  OK, easy enough, just tell Reporting Services to use SSL – just a check box, right?  Nope.  Do a little reading – all signs point to HttpCfg.exe.  Unless using Vista.  Then it’s netsh.exe.  Need a certificate – must generate one.  Use MakeCert.  OK, got all my parameters in check – generated.  What’s the hash?  Use certutil to get the hash (thumbprint), guid (app id).  Register it with netsh http add …..   ok, everything all lined up.

Now let’s go back into Reporting Services Configuration Manager and instruct it to use the certificate.  Piece of cake, select it from the drop down and fire it up.  Error?  Certificate is not trusted?  Oh good gracious.  Keep on researching, find out just how easy it is to do in XP.  Nothing about how easy it is to do in Vista, just lots of people complaining about how incredibly hard it is.


Oh, what’s this?  IIS 7 has an easy way to do it?  Oh YEAH!  Here are the steps to quickly create a test certificate you can use seamlessly:

1. Open IIS 7 Manager and click on the server you wish to create the certificate for.  In the Features View, double-click on Server Certificates.


2. The top right has an “Actions” menu.  Click on “Create Self-Signed Certificate” – could it really be this easy?



3. Type in a name for your certificate (yes, it really is this easy)


4. Launch Reporting Services Configuration Manager, go to “Web Service URL”, select your newly created SSL certificate from the list, and click Apply.  It is just that easy!


And now you can do all of your development testing over SSL locally.






Wednesday, September 15, 2010

One big cube or multiple cubes?

Found a good article by Chris Webb (SQL Server MVP), who knows quite a bit about Analysis Services.  It answers the question of "One cube or multiple?".   Good reading for bigger, more complex AS setups.


http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!7728.entry

Tuesday, September 14, 2010

Speeding up scalar functions in SQL Server

Our warehouse uses a UDF to check for dates outside of the acceptable range and assign them the designated “unknown dates”.  Why do it in SQL, you ask (vs. SSIS)?  Because it is an easy way to have control in one central place.  I made a change that pushed the date ranges from being hardcoded in the function to being table driven, so they can be shared across multiple clients and allow each client to retain its custom settings during deployments.  It was like someone hit the brakes – sure, the queries were very light (7 rows in the table), but calling the function several times in each ETL sproc blew it out to 20-30 million calls in a single ETL job.  Sure, the cache hit rate was 100%, but it still added almost 50% to the total ETL time.  What to do… what to do…

I ran across a great article by Itzik Ben-Gan (http://www.sqlmag.com/article/sql-server/inline-scalar-functions.aspx) on how to make your functions perform extremely fast – by simply changing them from scalar functions to inline table-valued functions.  He goes into the details of why it works – I will simply state the benchmarks:

This is calling the function as a scalar with the syntax of:

[DateField] = dbo.fn_CleanDate([column])

CPU        Reads         Writes   Duration
491,918    62,832,254    1        493,181,940
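
For context, the scalar version looked something like this (a sketch only - the internals of fn_CleanDate aren't shown here, so the dbo.DateRange table and its columns are placeholders for the table-driven ranges mentioned above):

create function dbo.fn_CleanDate (@d datetime)
returns datetime
as
begin
    declare @result datetime

    -- the expensive part: a data access on every single call
    select @result = case
                         when @d < r.MinDate then r.MinUnknownDate
                         when @d > r.MaxDate then r.MaxUnknownDate
                         else @d
                     end
    from dbo.DateRange r

    return @result
end
go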

Those are some big numbers…  so if we rewrite the function so that it returns a table with a single field, the resulting numbers are:

[DateField] = (select value from dbo.fn_SelectCleandate([column]))

CPU        Reads         Writes   Duration
330,176    41,901,480    0        330,524,577
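
The rewrite is an inline table-valued function: the same logic comes back as a single-row, single-column table, which lets the optimizer expand it into the calling query instead of invoking it row by row (again a sketch against the placeholder dbo.DateRange table):

create function dbo.fn_SelectCleanDate (@d datetime)
returns table
as
return
(
    select value = case
                       when @d < r.MinDate then r.MinUnknownDate
                       when @d > r.MaxDate then r.MaxUnknownDate
                       else @d
                   end
    from dbo.DateRange r
)
go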

Now for the kicker.  If you select your date values higher up in the sproc and pass them into the function, so there is no data access required in the sub-select, it performs blazing fast.  Yeah, it might clutter up the syntax a bit, but it is definitely worth it!

[DateField] = (select value from dbo.fn_SelectCleanDate([column], @minDate, @maxDate,@minUnknownDate, @maxUnknownDate)),

CPU        Reads         Writes   Duration
1,670      4,578         0        308
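
And the parameterized variant drops data access from the function body entirely - the calling sproc looks the ranges up once and passes them in (a sketch; since SQL Server does not overload UDFs, this replaces the single-parameter version above):

create function dbo.fn_SelectCleanDate
(
    @d              datetime,
    @minDate        datetime,
    @maxDate        datetime,
    @minUnknownDate datetime,
    @maxUnknownDate datetime
)
returns table
as
return
(
    -- no table access at all, just an expression over the parameters
    select value = case
                       when @d < @minDate then @minUnknownDate
                       when @d > @maxDate then @maxUnknownDate
                       else @d
                   end
)
go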


That deserves restating.  308 milliseconds vs. 493 thousand.

Thursday, August 26, 2010

Execute SQL on linked server (pass-through)

So living with one foot in Oracle land is helping me learn some of the lesser-known features of SQL Server.  For my 2nd post in one day, here is another thing I found too good not to spread: executing SQL on a linked server without SQL Server interpreting anything (a.k.a. pass-through).  While tinkering with using openquery to insert and update, deleting was just too darn slow, and I knew there had to be a better way.  Turns out the EXECUTE statement allows you to specify a linked server as a target.  Too cool...





execute ('delete from table_name1' ) at <<linkedServer>>
execute ('delete from table_name2' ) at <<linkedServer>>
execute ('delete from table_name3' ) at <<linkedServer>>
execute ('delete from table_name4' ) at <<linkedServer>>
execute ('delete from table_name5' ) at <<linkedServer>>

In the case above there were foreign key references between the tables - in my example I knew the relationships (and thus the order) needed to clean out the tables without FK errors - but with a little probing of the tables in Oracle via openquery I'm sure you could produce the necessary deletes in the correct order.
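
For example, something along these lines would pull the parent/child relationships straight out of Oracle's data dictionary (a sketch only; it assumes the linked-server login can see ALL_CONSTRAINTS, and the schema name is a placeholder):

select *
from openquery(<<linkedServer>>, '
    select c.table_name as child_table,
           p.table_name as parent_table
    from   all_constraints c
           join all_constraints p
             on  p.owner           = c.r_owner
             and p.constraint_name = c.r_constraint_name
    where  c.constraint_type = ''R''
       and c.owner = ''MY_SCHEMA''
')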


Another way to do it (non-pass-through) is the following - which I recommend only for selects, as updates/inserts/deletes will be evaluated on the SQL Server side and take too long.



select *
from openquery(<<linkedserver>>, 'select * from table_name1')
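
And for reference, modifying the remote table through openquery looks like this (a sketch with placeholder names); it works, but for the bulk deletes above it was far slower than the pass-through EXECUTE ... AT:

-- insert through the openquery rowset
insert into openquery(<<linkedserver>>, 'select col1, col2 from table_name1')
values (1, 2)

-- delete through the openquery rowset (the slow route for large volumes)
delete from openquery(<<linkedserver>>, 'select col1 from table_name1 where col1 = 1')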