Last post Jun 24, 2020 06:26 PM by bruce (sqlwork.com)
Jun 24, 2020 04:23 PM|erotavlas|LINK
I have a client application that interacts with a backend database through an ASP.NET Web API (2.0). All of the Web API controllers make changes to the database using Entity Framework 6.
Currently, if two clients are launched and both make the same additions to one of the tables, duplicate rows result. This is only one of the issues.
So I want to implement optimistic locking to reject one of the updates, and I was looking for guidance on how to do so.
I found this link, which describes using the [ConcurrencyCheck] attribute (except I'm not using EF Core).
Is this the correct way? Are there any other recommendations or best practices I should follow (specifically for the interaction between Web API and Entity Framework)?
NOTE: My entities look like this
Document Table --- Keywords Table
(Where Document Id is a primary Key for Document Table and Foreign Key in Keywords Table.)
User can highlight keywords in the client app and they get recorded in the Keywords table with start and end character positions.
So when users A and B both GET a document to start highlighting in the app, they are currently both able to add the same keywords to the Keywords table. An update to a single row in the Keywords table isn't much of a concern; it's more about changes to the set of keywords as a whole that belong to a particular Document Id.
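For concreteness, the model described above might look roughly like this in EF6. All class and property names here are assumptions based on the description, not the poster's actual code:

```csharp
using System.Collections.Generic;

// Hypothetical EF6 entities for the Document/Keywords relationship described above.
public class Document
{
    public int DocumentId { get; set; }            // primary key
    public string Text { get; set; }
    public virtual ICollection<Keyword> Keywords { get; set; }
}

public class Keyword
{
    public int KeywordId { get; set; }
    public int DocumentId { get; set; }            // foreign key to Document
    public string Text { get; set; }
    public int StartPosition { get; set; }         // start character position of the highlight
    public int EndPosition { get; set; }           // end character position of the highlight
    public virtual Document Document { get; set; }
}
```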
Jun 24, 2020 05:51 PM|mgebhard|LINK
Have you read the EF Core docs?
Jun 24, 2020 06:18 PM|erotavlas|LINK
I assume something similar exists for regular EF? (I'm not using Core.) In any case, I'll try to find it.
Edit: I found it - https://docs.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application
Jun 24, 2020 06:26 PM|bruce (sqlwork.com)|LINK
It won't work for your case. Optimistic locking works by adding a SQL timestamp (rowversion) column to the table. When you do an update, you include the timestamp value you read, and the update fails if the values don't match. SQL Server maintains the timestamp, so it's reliable.
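A minimal sketch of that row-level mechanism in EF6, assuming a hypothetical Document entity and DbContext (in EF6 the attribute is [Timestamp] and the exception type is DbUpdateConcurrencyException):

```csharp
using System.ComponentModel.DataAnnotations;
using System.Data.Entity.Infrastructure;

// Hypothetical entity. [Timestamp] maps to a SQL Server rowversion column
// that the database updates automatically on every write to the row.
public class Document
{
    public int DocumentId { get; set; }

    [Timestamp]
    public byte[] RowVersion { get; set; }
}

// On SaveChanges, EF6 includes RowVersion in the UPDATE's WHERE clause.
// If another client changed the row since this client read it, zero rows
// match and EF throws DbUpdateConcurrencyException.
//
// try
// {
//     db.SaveChanges();
// }
// catch (DbUpdateConcurrencyException)
// {
//     // Reject this update: reload the entity and ask the client to retry.
// }
```

As noted above, though, this only protects updates to an existing row; it does nothing for concurrent inserts or deletes of other rows in the set.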
In your case you are adding and deleting rows, which requires an additional lock table. When clients read the keyword rows for a document, they also get a lock id (timestamp), and it must match on the update.
I typically add a lock row to the table at document read for each user. Then, on update, check whether a lock row exists for that document/user; if so, perform the update and delete the lock rows of the other users for the same document. Be sure to do the update/delete as a repeatable transaction.
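The lock-table flow above could be sketched like this in EF6. The DocumentLock entity, the DbContext property names, and the variables are all assumptions for illustration:

```csharp
using System;
using System.Data;
using System.Data.Entity.Infrastructure;
using System.Linq;

// 1. On GET, record a lock row for this user/document (hypothetical entity).
db.DocumentLocks.Add(new DocumentLock
{
    DocumentId = docId,
    UserId = userId,
    LockedAt = DateTime.UtcNow
});
db.SaveChanges();

// 2. On update, run the check + update + cleanup in a single transaction
//    so no other client can slip in between the check and the save.
using (var tx = db.Database.BeginTransaction(IsolationLevel.Serializable))
{
    bool stillLocked = db.DocumentLocks
        .Any(l => l.DocumentId == docId && l.UserId == userId);

    if (!stillLocked)
    {
        // Another user already saved this document; our read is stale.
        throw new DbUpdateConcurrencyException("Document was modified by another user.");
    }

    // ... apply this user's keyword additions/deletions here ...

    // Invalidate the reads of every other user holding this document.
    var otherLocks = db.DocumentLocks
        .Where(l => l.DocumentId == docId && l.UserId != userId);
    db.DocumentLocks.RemoveRange(otherLocks);

    db.SaveChanges();
    tx.Commit();
}
```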
You could also decide that two users adding different keywords to the same document is OK. In that case you just detect duplicate rows (insert if not exists, delete if exists).
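A sketch of that "insert if not exists" alternative, again with assumed entity and property names. This treats the keyword set as convergent: duplicate additions are simply skipped instead of rejected:

```csharp
using System.Linq;

// Skip the insert if an identical keyword row already exists for this document.
bool exists = db.Keywords.Any(k =>
    k.DocumentId == docId &&
    k.StartPosition == start &&
    k.EndPosition == end);

if (!exists)
{
    db.Keywords.Add(new Keyword
    {
        DocumentId = docId,
        StartPosition = start,
        EndPosition = end
    });
    db.SaveChanges();
}
```

Note that the Any check and the insert can still race between two clients, so a unique index on (DocumentId, StartPosition, EndPosition) in the database is the real safety net; the check just avoids the common case of a constraint violation.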