Rule 1 of database architecture: the application never writes directly to PROD_ODS. The OP described a situation that appeared to be occurring because multiple users of an application were writing to the Operational Data Store at the same or similar intervals (or, worse yet, issuing ad hoc updates and inserts), causing deadlocks. This indicates, at least to me, the lack of a single non-user, non-application ID that is responsible for writing to PROD_ODS.

If updates, deletes, and/or inserts to a table are to occur, then a default constraint should be placed on two columns of datatype varchar(nn), called Last_Updated_By and Created_By, that uses the return of the ORIGINAL_LOGIN() or SUSER_ID() functions as the default value. Those columns should then be checked to validate that only the authorized integration ID or authorized firecall ID appears as a value in them. Then, on the stage data store, prior to the data being integrated, a system of checks and balances vets the data, deduplicates it, and fills in missing values or flags non-applicable values, prior to sorting and then integrating through the ETL process.

Index seeks are good; index and table scans are bad if there are lots of rows. If you fix indexes so that things happen quickly, it might reduce the risk of the collision that creates the deadlock to where it happens so rarely it isn't worth the cost of fixing the code. Having lots of rollbacks creates compounding bottlenecks: transaction rollbacks are single-threaded, while moving forward in a transaction can use parallelism (it's why trying to stop something that has run for 10 minutes sometimes takes hours). If you have CHAR/VARCHAR columns and are passing search parameters to SQL Server from ADO.NET, and those columns are indexed, the indexes won't be used: the default string parameter type is NVARCHAR, and an NVARCHAR parameter won't use a CHAR/VARCHAR index. Rerun the queries you copy/paste from Profiler in SSMS and look at the execution plans.
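A minimal sketch of the audit-column recommendation above, assuming a hypothetical staging table (dbo.Customer_Stage) and hypothetical authorized login names (ETL_SVC, FIRECALL_01) — the column sizes, constraint names, and the validation check are illustrative, not the OP's actual schema:

```sql
-- Hypothetical table and names; only ORIGINAL_LOGIN() is from the original advice.
ALTER TABLE dbo.Customer_Stage ADD
    Created_By      varchar(128) NOT NULL
        CONSTRAINT DF_Customer_Stage_Created_By      DEFAULT (ORIGINAL_LOGIN()),
    Last_Updated_By varchar(128) NOT NULL
        CONSTRAINT DF_Customer_Stage_Last_Updated_By DEFAULT (ORIGINAL_LOGIN());

-- One way to enforce that only the authorized integration ID or firecall ID
-- writes to the table; the two login names are placeholders.
ALTER TABLE dbo.Customer_Stage WITH CHECK ADD
    CONSTRAINT CK_Customer_Stage_Authorized_Writer
        CHECK (Last_Updated_By IN ('ETL_SVC', 'FIRECALL_01'));
```

ORIGINAL_LOGIN() is used here because it returns the login name even under impersonation; note that SUSER_ID() returns an integer login ID, so a name-returning function is the more natural fit for a varchar audit column.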
You'll find the ones that are causing the deadlocks, and then you get the developers to fix their code. The shorter the transactions, the less likely a deadlock. 15 GB is small, and a million large rows would have been no challenge even for hardware running mechanical drives back in 2001.

Let's say 10 clients all want to run the following at about the same time, and let's assume this code takes a bit to run: begin tran

Session 2 arrives, sees that session 1 already has a lock on table1, and waits. No deadlocks will occur here, no matter how many sessions want to run this at once.

But what if there's a second stored proc with the following: begin tran

Session 2 starts the second one while session 1 is running the first. Session 1 has a lock on table 1 and won't release it until it completes both updates. Session 2 has a lock on table 2 and won't release it until it completes both updates. Both are waiting on each other, and so this is a deadlock.

Again, the strategy to fix this is to identify the code that is triggering the deadlocks and change it so that all procs access the tables in the same order. Or, if that's too difficult a strategy, at least optimize things so that each transaction runs as fast as possible.
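The bodies of the two stored procedures in the example above were truncated in this copy. A sketch of what they plausibly looked like, with hypothetical table, procedure, and column names, based on the surrounding description (the first proc updates table1 then table2; the second touches the same tables in the opposite order):

```sql
-- Hypothetical reconstruction; names and WHERE clauses are illustrative.
CREATE PROCEDURE dbo.Proc1 AS
BEGIN TRAN;
    UPDATE dbo.table1 SET col1 = col1 + 1 WHERE id = 1;  -- takes a lock on table1 first
    UPDATE dbo.table2 SET col1 = col1 + 1 WHERE id = 1;  -- then needs table2
COMMIT;
GO

-- The second proc accesses the same tables in the OPPOSITE order:
CREATE PROCEDURE dbo.Proc2 AS
BEGIN TRAN;
    UPDATE dbo.table2 SET col1 = col1 + 1 WHERE id = 1;  -- takes a lock on table2 first
    UPDATE dbo.table1 SET col1 = col1 + 1 WHERE id = 1;  -- then needs table1
COMMIT;
GO
```

Run concurrently, session 1 (in Proc1) holds its lock on table1 and waits for table2, while session 2 (in Proc2) holds its lock on table2 and waits for table1 — a cycle SQL Server resolves by killing one session. Rewriting Proc2 to update table1 before table2 removes the cycle.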