INSERT...SELECT is a great command, but sometimes you just need more: sometimes you need to get bits of the information back.
The first thing you will attempt, of course, is to declare the few variables you need. And yes, you will see that you can indeed assign them, along with updating columns, all in one call. Neat!
[syntax_prettify linenums="linenums"]-- reconstructed from a garbled snippet; NewValue stands in for whatever you are updating to
UPDATE OriginalTable
SET @ID = ID,
    @OldValue = OriginalTable.Value,
    Value = NewValue
WHERE OriginalTable.AddDate < GETDATE() - 30[/syntax_prettify]
Unfortunately, not only will other types of statements (i.e. DELETE and INSERT) fail to do that, but your variable(s) will hold only one value, that of the last row processed. Yes, you could concatenate... but are you really going to do that crazy 20th-century workaround nonsense again? Our old OUTPUT keyword comes to the rescue: it redirects the processed rows back to wherever you tell it, including a table variable that you might have created just for that purpose.
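Here is a minimal sketch of that idea (the @Deleted table variable is hypothetical, and the 30-day filter simply mirrors the example above): OUTPUT captures every affected row from a DELETE, not just one.

[syntax_prettify linenums="linenums"]-- A table variable to receive the redirected rows
DECLARE @Deleted TABLE (ID INT, OldValue INT);

DELETE FROM OriginalTable
OUTPUT DELETED.ID, DELETED.Value INTO @Deleted (ID, OldValue)
WHERE OriginalTable.AddDate < GETDATE() - 30;

-- Every removed row is now available, not just the last one
SELECT * FROM @Deleted;[/syntax_prettify]

The same pattern works with INSERTED.* on INSERT and UPDATE statements.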
We've already talked about batch processing in SQL - Cursor and Batching, but it's not just human labor that batching helps with. One of the best ways to optimize any kind of code is to limit the number of hits it makes against storage (physical or otherwise). This can come at the expense of more CPU cycles, so proper compromises need to be made to ensure a true performance improvement.
In the case of SQL, this is usually achieved by limiting the number of separate calls to the databases/tables, and ultimately by limiting the number of requests themselves (SELECT/INSERT/UPDATE/DELETE), wrapping them into as few statements as possible... Oh, how many times have I seen very simple procedures with cursors, or just check requests (i.e. IF EXISTS) followed by possible change requests, followed yet again by a closing check or pull request. Why would you do that, when you can easily do all of this at once and allow the engine to optimize properly out of the box? The harsh cultural weight of our 20th-century coding backgrounds, perhaps?
Somewhat surprisingly, batch processing remains one of the most gaping holes in DB power users' knowledge bases... Processing records one by one, searching, fetching, updating: they seem to be fine with all of this no matter the type of DB, but there is something about batching that many do not seem to grasp. What's worse is that they don't even KNOW they are missing out...
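To make the point above concrete, here is a hedged sketch (the Users table, Status and LastLogin columns are purely illustrative) of collapsing the classic check-then-change pattern into a single statement:

[syntax_prettify linenums="linenums"]-- The 20th-century way: a check request, then a change request
IF EXISTS (SELECT 1 FROM Users WHERE Status = 0 AND LastLogin < GETDATE() - 30)
BEGIN
    UPDATE Users SET Status = 1
    WHERE Status = 0 AND LastLogin < GETDATE() - 30;
END

-- The batched way: the check is folded into the WHERE clause of one statement,
-- and @@ROWCOUNT tells you afterwards whether anything matched
UPDATE Users SET Status = 1
WHERE Status = 0 AND LastLogin < GETDATE() - 30;

SELECT @@ROWCOUNT AS RowsChanged;[/syntax_prettify]

One statement means one pass over the data and one plan for the engine to optimize, instead of two round trips doing overlapping work.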
Previously I shared an extremely powerful method of finding objects such as procedures, functions, and triggers... but what about tables (or views)? MS SQL is, once again, able to do this quite easily and logically, without the need for any expensive and limited 3rd-party software:
[syntax_prettify linenums="linenums"]USE MyDataBase;

-- the SELECT/FROM/WHERE keywords below were restored from a truncated snippet
SELECT TABLES.TABLE_NAME, COLUMNS.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLES AS TABLES
JOIN INFORMATION_SCHEMA.COLUMNS AS COLUMNS
    ON COLUMNS.TABLE_NAME = TABLES.TABLE_NAME
WHERE COLUMN_NAME LIKE '%SEARCHNEEDLE%'
    AND TABLE_TYPE <> 'VIEW'[/syntax_prettify]
Yet another "new" feature of MS SQL 2005 is the often-overlooked error handling. Transact-SQL TRY...CATCH is designed to operate similarly to the exception handlers in the Visual/.NET languages: if an error occurs inside the TRY block of a query, control is passed to another group of statements enclosed in a CATCH block (and so on)... Or so it should. The handler will not help with errors of severity 20+, KILLs, and, of course, various warnings and informational messages, so be careful and remember to test all of the scenarios right away.
[syntax_prettify linenums="linenums"]BEGIN TRY
    -- the statements that may fail go here; a forced failure for illustration
    SELECT 1/0
END TRY
BEGIN CATCH
    SELECT
        ERROR_NUMBER() AS ErrorNumber,
        ERROR_SEVERITY() AS ErrorSeverity,
        ERROR_STATE() AS ErrorState,
        ERROR_PROCEDURE() AS ErrorProcedure,
        ERROR_LINE() AS ErrorLine,
        ERROR_MESSAGE() AS ErrorMessage
END CATCH

SELECT 'Continue the run'[/syntax_prettify]
Note that this behavior can also be triggered manually, via the RAISERROR command (notice the spelling, with only one E):
[syntax_prettify linenums="linenums"]RAISERROR('Fatal error', 16, 1)[/syntax_prettify]
Lastly, when using this functionality, you might sometimes need to make sure that you actually exit and do not run the rest of the statements; feel free to add a RETURN command in such cases...
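A minimal sketch of that pattern (the divide-by-zero is just a forced failure for illustration):

[syntax_prettify linenums="linenums"]BEGIN TRY
    SELECT 1/0;  -- forced failure
END TRY
BEGIN CATCH
    SELECT ERROR_MESSAGE() AS ErrorMessage;
    RETURN;  -- exit the batch here so nothing below runs
END CATCH

SELECT 'This line is never reached when the CATCH block fires';[/syntax_prettify]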
MS SQL 2005 and up adds support for the APPLY clause, which, in turn, lets you join a table to dynamic sets such as table-valued functions or even derived tables. While we can argue over the benefits and dangers of the latter (another article, perhaps?), being able to do things more than one way is certainly always welcome.
The two new operators (CROSS APPLY and OUTER APPLY) are essentially the INNER and OUTER JOINs of table-valued functions (you cannot directly join those like tables). Here is just a simple example of the usage (don't hesitate to expand on it):
[syntax_prettify linenums="linenums"]-- the alias is named Children here because ALL is a reserved word
SELECT Children.CustomerID
FROM Customer AS C
CROSS APPLY fnSelectAllChildren(C.CustomerID) AS Children
WHERE C.Status = 1[/syntax_prettify]
This technique can also be used in more complex queries alongside derived tables, FOR XML, etc., covering the need for things such as an inline multi-row CONCAT (collapsing multiple rows into a single column):
[syntax_prettify linenums="linenums"]-- the outer SELECT list and the FOR XML PATH('') line were restored from context
SELECT
    ito.ID,
    (
        SELECT TOP 1
            ISNULL(LEFT(o.list, LEN(o.list) - 1), 'Unknown')
        FROM Orders AS ito2
        CROSS APPLY (
            -- FOR XML PATH('') glues the rows into one comma-separated string
            SELECT CONVERT(VARCHAR(12), ServiceType) + ',' AS [text()]
            FROM OrderService AS itos2 (NOLOCK)
            JOIN OrderServiceType AS itost (NOLOCK)
                ON itost.Id = itos2.OrderServiceType
            WHERE itos2.OrderId = ito.ID
            FOR XML PATH('')
        ) o (list)
        WHERE ito2.ID = ito.ID
    ) AS AllTypes
FROM Orders (NOLOCK) ito
WHERE ito.Status = 0[/syntax_prettify]
What do you do if you need to find a certain string used in a stored procedure? What if you need to be completely certain that you can remove an object and that there are no dependencies within the server itself? There are a number of 3rd-party tools that allow searching within SQL Server database schemas, but some are slow due to precaching and some are simply not powerful enough due to a lack of options. SQL itself comes to the rescue!
The query below can be as simple or as advanced as you want it to be, but no matter what, it is always fast and easy to use:
[syntax_prettify linenums="linenums"]SELECT
    OBJECT_NAME([id]) AS 'ObjectName',
    MAX(CASE WHEN OBJECTPROPERTY([id], 'IsProcedure') = 1 THEN 'Procedure'
             WHEN OBJECTPROPERTY([id], 'IsScalarFunction') = 1 THEN 'Scalar Function'
             WHEN OBJECTPROPERTY([id], 'IsTableFunction') = 1 THEN 'Table Function'
             WHEN OBJECTPROPERTY([id], 'IsTrigger') = 1 THEN 'Trigger'
        END) AS 'ObjectType'
-- the FROM/WHERE/GROUP BY lines were restored from context; syscomments holds object source text
FROM syscomments
WHERE [text] LIKE '%SEARCHNEEDLE%'
    AND (OBJECTPROPERTY([id], 'IsProcedure') = 1
        OR OBJECTPROPERTY([id], 'IsScalarFunction') = 1
        OR OBJECTPROPERTY([id], 'IsTableFunction') = 1
        OR OBJECTPROPERTY([id], 'IsTrigger') = 1)
GROUP BY OBJECT_NAME([id])[/syntax_prettify]
[alert type="info"]4/2/2012: Updated with more advanced SQL example[/alert]
Following up on the previous article SQL – The Power of NULLs, here are some of the things to keep in mind when you do decide to use NULL in your work.
By far one of the most common mistakes is to assume that <> (not equal) or even = (equal) logic covers NULLs. That is completely incorrect in both cases: NULL is neither equal to anything nor unequal to anything, including itself. Consider the following code:
[syntax_prettify]SELECT CASE WHEN 1<>NULL THEN 1 WHEN NULL=NULL THEN 0 END[/syntax_prettify]
Of course, the end result will be neither: NULL.
What about this one: will WHEN help us do the comparison on the fly?
[syntax_prettify]SELECT CASE NULL WHEN NULL THEN 1 ELSE 0 END[/syntax_prettify]
Just a little: the ELSE logic will kick in, covering "all other" possibilities, but the CASE logic still relies on = and <> just the same way. NULL is still nothing; it still never rings TRUE.
So what are we left with? Well, you can always use ISNULL (the great counterpart of NULLIF), and COALESCE in more advanced cases. Either way, remember: NULLs can help you do really interesting things with your DB and your queries; you just need to be careful, that's all.
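For a quick feel of the trio (the inline comments reflect standard T-SQL behavior):

[syntax_prettify linenums="linenums"]SELECT
    ISNULL(NULL, 'fallback')      AS IsNullDemo,   -- 'fallback'
    COALESCE(NULL, NULL, 'third') AS CoalesceDemo, -- first non-NULL argument: 'third'
    NULLIF('same', 'same')        AS NullIfDemo;   -- NULL whenever both arguments match[/syntax_prettify]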
Some people despise NULLs for their unusual behavior and apparent complexity, while others remain suspicious simply because of a lack of experience with them. Although at times it is not a good idea to use them in database design, they often come in handy in queries alone. They let us easily test for the "third case": a complete lack of data. Assume you have a bit column that only allows 0 and 1, so you can distinguish whether something is "on" or "off". What better way than NULL to test for the row's very presence, or vice versa?
[sql]-- MyTable is a stand-in name; the SELECT/FROM lines were restored from a truncated snippet
SELECT *
FROM MyTable
WHERE BitColumn IS NOT NULL[/sql]
This approach can also be easily extended to filter vast chunks of data. Suppose you have a big table of users and a table of their last logins (i.e. an audit). By left-joining the latter and checking for absent rows, you can tell which users haven't logged in within the last 3 weeks (or have never logged in, for that matter):
[sql]-- the SELECT/FROM lines were restored from a truncated snippet
SELECT Users.*
FROM Users
LEFT JOIN AuditUsers
    ON AuditUsers.UserID = Users.ID
    AND ChangeDate > DATEADD(week, -3, GETDATE())
WHERE AuditUsers.ID IS NULL[/sql]
As you can see, the possibilities are really endless. Here is another neat trick that probably deserves an article all by itself. I actually use this more often than I would like, but it does allow me to easily set "preferences" in the data output (or filter it altogether) based on seemingly unrelated data fields, all by having a specific CASE statement. Here is an example derived from the one above. This approach is at its best when used against multiple tables and columns, but this should suffice to show how it works. In this example we will first see users that never logged in at all, then all users who last logged in over 3 weeks ago, and then all the rest. Note that you can also use TOP to limit the output to just the first few rows, which is very handy:
[sql]-- the SELECT/FROM/ORDER BY lines and the DATEADD arguments were restored from context
SELECT Users.*
FROM Users
LEFT JOIN AuditUsers
    ON AuditUsers.UserID = Users.ID
ORDER BY
    CASE WHEN AuditUsers.ID IS NULL THEN 0
         WHEN ChangeDate < DATEADD(week, -3, GETDATE()) THEN 1
         ELSE 2
    END[/sql]
Edited by Katya Pupko