Fault Tolerant Background Job
Apr 13, 2020 03:51 PM|hasnihaider|LINK
I have a worker service up and running in my ASP.NET Core 2.2 app, and I want to solve a problem with it: making it a fault-tolerant background service.
I have a shared table on which two different jobs select and update data. We are syncing Office Calendar and Mail, so the access token has to be stored in the database. If either service fails to access the API, it refreshes access using the refresh token and then updates the token in the database as well. Now there might be a situation where one service is working with a row while the other has already updated the tokens in the database; we could end up in a loop where both refresh and update the tokens over and over again. What is the workaround to avoid such scenarios?
Thanks
Re: Fault Tolerant Background Job
Apr 13, 2020 05:56 PM|bruce (sqlwork.com)|LINK
Use proper locking of the token. It should be checked out for update and checked in when valid. The checkout should have a timeout in case the process that did the checkout fails.
logic:
    new = false
    loop
        token = getToken(new)
        if useToken() exit loop
        new = true
    end loop

    func getToken(new)
        loop
            if new
                check out token
                if token is already checked out
                    new = false
                    continue loop
                end if
                token = create new database token
                release checkout
                return token
            else
                token = read current database token
                if token is checked out
                    delay
                    continue loop
                end if
                if checkout is expired
                    new = true
                    continue loop
                end if
                return token
            end if
        end loop
    end func
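The check-out/check-in idea can be sketched in C#. This is an in-memory stand-in written for illustration only — `TokenStore`, `TryCheckOut`, and `CheckIn` are assumed names, not the poster's actual schema. In a real system the compare-and-set would be a conditional `UPDATE` against the token row so it stays atomic across processes:

```csharp
using System;

// In-memory stand-in for the shared token row. In production the
// check-out would be a conditional UPDATE (e.g. "UPDATE Tokens SET Holder = @job
// WHERE Holder IS NULL OR LeaseExpires < @now"), not an in-memory lock.
public class TokenStore
{
    private readonly object _gate = new object();
    private string _holder;             // which job has the row checked out, if any
    private DateTime _leaseExpires;     // checkout timeout, so a crashed job cannot block forever
    public string Token { get; private set; } = "initial-token";

    // Try to check the token out for update; fails while another job
    // holds an unexpired lease.
    public bool TryCheckOut(string job, TimeSpan lease)
    {
        lock (_gate)
        {
            if (_holder != null && DateTime.UtcNow < _leaseExpires)
                return false;           // another job is refreshing: delay and re-read instead
            _holder = job;
            _leaseExpires = DateTime.UtcNow + lease;
            return true;
        }
    }

    // Store the refreshed token and release the checkout.
    public void CheckIn(string job, string newToken)
    {
        lock (_gate)
        {
            if (_holder != job) return; // lease expired and was taken over: drop the stale refresh
            Token = newToken;
            _holder = null;
        }
    }
}
```

The key point is that the mail job and the calendar job can never both be inside a refresh at the same time: the loser of the check-out delays and then re-reads the token the winner stored, instead of refreshing again.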
Re: Fault Tolerant Background Job
Apr 14, 2020 01:20 PM|hasnihaider|LINK
Hello, thanks for your reply. There is another thing I would like to know, since there is only one job that gets the data from the Office API and pushes it to the database. I have seen workers in Laravel where we have the option to queue the data into Redis or the database, and then another worker processes each entry from the database or Redis. Do we have such a feature/framework/library for ASP.NET Core?
Moreover, we are near MVP deployment, and we upload the .dll files over and over again, which interrupts the job partway through. For example, say I received 100 emails from the Outlook API, we processed 50 of them, and then the dotnet service on Linux was restarted. The deltaLink/sync token is only stored in the database after all 100 emails are processed, so we still have the previous sync token. When the service restarts, the system will process the previously imported entries again, and we will end up with redundant data in the database. How can we avoid such situations?
Re: Fault Tolerant Background Job
Apr 16, 2020 09:47 AM|Elendil Zheng - MSFT|LINK
Hi hasnihaider,
Do we have such a feature/framework/library for Asp.NET Core?
I think what you mean is something like a message queue with message push and pop functions. It can be separated into three different scenarios. For a message queue across different processes and different machines: no, ASP.NET does not have built-in support for it; you need to utilize a third-party message queue product such as RabbitMQ, Kafka, or Redis. All of these products have official APIs for .NET, and I believe you can find them on NuGet easily.
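As an in-process starting point before adopting an external broker, the Laravel-style "queue, then process" pattern can be sketched with `System.Threading.Channels` (built into .NET Core 3.0+, and on NuGet for 2.2). The `EmailMessage` type and the message count are illustrative; in a real worker the consumer loop would live inside a `BackgroundService`:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

// Illustrative message type; a real one would carry the Outlook payload.
public class EmailMessage
{
    public string Id { get; set; }
    public string Subject { get; set; }
}

public static class QueueDemo
{
    public static async Task<int> RunAsync()
    {
        var channel = Channel.CreateUnbounded<EmailMessage>();

        // Producer side: the job that pulls from the Office API writes here.
        for (int i = 0; i < 3; i++)
            await channel.Writer.WriteAsync(new EmailMessage { Id = $"msg-{i}", Subject = $"Subject {i}" });
        channel.Writer.Complete();   // no more messages

        // Consumer side: drains the queue; this is where you would save to the database.
        int processed = 0;
        await foreach (var msg in channel.Reader.ReadAllAsync())
            processed++;
        return processed;
    }
}
```

A channel only survives inside one process, which is exactly why the answer above points at RabbitMQ, Kafka, or Redis once the producer and consumer run on different machines.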
For the second question, the best way is to write your process status somewhere; when there is a service/system restart, check the saved status and resume your job from the break point.
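A tiny sketch of "write your process status somewhere and resume from the break point": persist a cursor after each processed item and read it back on startup. A file is used here only to keep the example self-contained; in the thread's setup the cursor would live in the database next to the sync token, and all names are assumed:

```csharp
using System;
using System.IO;

// Minimal checkpoint: stores the index of the next item to process.
public static class Checkpoint
{
    // Read the cursor; 0 on a fresh run.
    public static int Load(string path) =>
        File.Exists(path) ? int.Parse(File.ReadAllText(path)) : 0;

    // Record that everything before 'next' is already processed.
    public static void Save(string path, int next) =>
        File.WriteAllText(path, next.ToString());
}
```

The worker loop would then look like `for (int i = Checkpoint.Load(path); i < emails.Count; i++) { Process(emails[i]); Checkpoint.Save(path, i + 1); }`, so a restart after 50 of 100 emails resumes at item 50 instead of item 0.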
Re: Fault Tolerant Background Job
Apr 16, 2020 10:24 PM|bruce (sqlwork.com)|LINK
If you are processing emails, you should log the messageId so you don't reprocess the same message.
As Microsoft is pushing cloud computing, it should be no surprise that on-prem computing lacks message queues. If you picked AWS or Azure, you could use an event hub and serverless functions for this architecture.
Azure also has the cool Durable Functions.
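The messageId logging suggested above can be sketched like this. `ProcessedLog` and `TryMarkProcessed` are names invented for the example; an in-memory set stands in for what would really be a database table with a unique index on MessageId:

```csharp
using System.Collections.Generic;

// Records every messageId that has already been imported, so a restart
// mid-batch cannot insert the same email twice.
public class ProcessedLog
{
    private readonly HashSet<string> _seen = new HashSet<string>();

    // True only the first time an id is offered; import the email only
    // when this returns true, and replays after a restart become no-ops.
    public bool TryMarkProcessed(string messageId) => _seen.Add(messageId);
}
```

With a unique index on MessageId in the database, the same effect falls out of the insert itself: a duplicate insert fails, the worker treats that as "already done", and the import is idempotent regardless of where the restart happened.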