Performance Tuning with Serilog and Application Insights in .NET Core
One thing that always drives a developer crazy is when your code works in development, and it’s fast, but when it goes to production it suddenly throws some curveballs. Sometimes the response time is a few milliseconds and other times it can take a few seconds. What gives?
Thankfully, if you’re hosting your application in Azure, you can connect your application to Application Insights, even though the option is disabled in the portal if you’re running on a Linux App Service Plan. You can do so through Serilog, a simple .NET logging platform built with “structured event data in mind.”
Now, there are plenty of ways to set up Serilog (OK, mainly two ways), but the way I set up most of my applications can be found in this GitHub Gist.
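I won’t reproduce the Gist here, but a minimal sketch of the general shape, configuring Serilog in `Program.cs` with the Application Insights sink before building the host, might look like this (the log level, converter choice, and `Startup` class are assumptions):

```csharp
using System;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Serilog;

public class Program
{
    public static int Main(string[] args)
    {
        // Configure the static logger first so startup failures are captured too.
        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.Information()
            .Enrich.FromLogContext()
            .WriteTo.Console()
            // Ship events to Application Insights as traces.
            .WriteTo.ApplicationInsights(
                TelemetryConfiguration.CreateDefault(),
                TelemetryConverter.Traces)
            .CreateLogger();

        try
        {
            CreateHostBuilder(args).Build().Run();
            return 0;
        }
        catch (Exception ex)
        {
            Log.Fatal(ex, "Host terminated unexpectedly");
            return 1;
        }
        finally
        {
            // Flush buffered events before the process exits.
            Log.CloseAndFlush();
        }
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseSerilog() // plug Serilog in as the host's logging provider
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
}
```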
To do this, you need two NuGet packages:
When you install the first package, a couple of changes will happen to your _Layout.cshtml page so that Application Insights can do its thing. Since you’re using .NET Core, you’ll also want to make sure you have app.UseSerilogRequestLogging(); called in your Startup.cs class (within the Configure method).
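As a sketch, that call sits near the top of Configure so it sees every request; the surrounding middleware here is just illustrative:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Serilog;

public class Startup
{
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        // Emit one structured Serilog event per HTTP request,
        // including the status code and elapsed time.
        app.UseSerilogRequestLogging();

        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}
```

Placing it early means the timing covers the rest of the pipeline, which is exactly the number you’ll want when chasing slow responses.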
In my scenario, I was calling an API hosted on the same server to create a short URL for my bot, StanLeeBot. In development, when I would call the service I would receive a shortened URL in a matter of milliseconds. But when I later went on to test it in Slack and other integrations, it began returning errors because some integrations, like Slack, expect a response within 5 seconds. Seems generous enough.
So into Application Insights I went. I drilled down into Performance, looked for the Operation Name, and found some interesting results.
Whoa there. As we can see, SQL is doing its job more than proficiently. What are those HTTP calls, though? Well, when someone first creates an account via StanLeeBot to create a short URL, an email is sent to them. I also like to receive an email when a new user signs up. Currently, I’m using MailGun’s API to send emails asynchronously, but there’s still a significant hit, with each call taking roughly 500 ms, sometimes more!
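The cost compounds because the signup flow sends two emails, one to the user and one to me; awaiting them one after another roughly doubles the wait inside the request, while overlapping them with Task.WhenAll pays the ~500 ms only once. A sketch with simulated calls (SendEmailAsync is a hypothetical stand-in for the MailGun call, not StanLeeBot’s actual code):

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class EmailTiming
{
    // Stand-in for a MailGun API call that takes ~500 ms.
    static Task SendEmailAsync(string to) => Task.Delay(500);

    public static async Task Main()
    {
        var sw = Stopwatch.StartNew();
        await SendEmailAsync("user@example.com"); // ~500 ms
        await SendEmailAsync("me@example.com");   // ~500 ms more
        Console.WriteLine($"Sequential: ~{sw.ElapsedMilliseconds} ms");

        sw.Restart();
        // Both calls are in flight at the same time.
        await Task.WhenAll(
            SendEmailAsync("user@example.com"),
            SendEmailAsync("me@example.com"));
        Console.WriteLine($"Concurrent: ~{sw.ElapsedMilliseconds} ms");
    }
}
```

Overlapping the calls helps, but the request still blocks on an external service, which is why the queue ideas below the fold are tempting.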
Solving this issue is still something I’m trying to determine. There are quite a few options, but most seem somewhat overkill for a simple application. One is to simply create a messaging queue service where I drop the emails into a table, the table is scanned every so often, and the messages are mailed out. Another option is utilizing Azure Cosmos DB with the Change Feed and an Azure Function. With this option, I believe, I’d only make one HTTP call to store the emails and then the Function would handle the rest.
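The first option can be sketched with a hosted background service: the request path just drops the email into a queue (standing in for the table here; all names are hypothetical) and returns immediately, and a worker drains it on a timer:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// A pending email; in the real version this would be a table row.
public record PendingEmail(string To, string Subject, string Body);

// Shared in-memory queue standing in for the "emails table".
public class EmailQueue
{
    public ConcurrentQueue<PendingEmail> Items { get; } = new();
    public void Enqueue(PendingEmail email) => Items.Enqueue(email);
}

// Scans the queue every so often and mails out whatever it finds,
// keeping the HTTP request path fast.
public class EmailSenderService : BackgroundService
{
    private readonly EmailQueue _queue;
    public EmailSenderService(EmailQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            while (_queue.Items.TryDequeue(out var email))
            {
                // The MailGun call would go here; failures could be re-enqueued.
                Console.WriteLine($"Sending to {email.To}: {email.Subject}");
            }
            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}
```

Both EmailQueue and EmailSenderService would be registered in DI, with the service added via AddHostedService. An in-memory queue loses messages on restart, which is one reason the table or Cosmos DB variants are more robust.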
For now, at least I know what the issue is. The next stop is figuring out the best solution without going wild.
Thanks for reading!