Dear CIOs: I know why your environment is unstable.
The number one cause of instability in most database environments is SQL Server permissions. Specifically, the problem is that developers, QA engineers, application teams, managers, VPs, and others have more access to production systems than they should.
I’ve been a lead DBA in large shops for twenty years, and I’ve seen this more times than I can count. Let me walk you through a typical lockdown.
For a very short executive summary, skip to the bottom of this article.
1. The Lockdown Email
Often, a manager will get tired of all of the extra people in the database causing fires, and so he initiates a lockdown. He tells the database team: “Everyone who doesn’t need to be in the database needs to go. And everyone else should have only what they need to do their job.”
The DBAs don’t know what permissions everyone needs, so they send out an email to explain what’s up. “It’s time for a lockdown on database access. Please send what permissions you need to do your job.”
This is the beginning of the end. It’s this first step that causes the entire project to fail. We’ll get to why later, but for now let’s move on to the next step: the knee-jerk reaction.
2. The Knee-jerk Reaction
Now, everyone who got the email replies that they do indeed have the access they need to do their jobs. Steve – who does nothing but write reports – definitely needs sysadmin*. And Carol, who does quality assurance, totally needs to run, alter, create, and delete every job on the system…even though she only manages two jobs.
The rest of the staff are pretty much the same. Nobody wants the DBAs to trim their permissions.
*Sysadmin gives a user total control of the server, including the ability to drop entire databases and grant permissions to other users.
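As a practical aside, the first step of any lockdown is finding out who actually holds that role today. A quick catalog query against the server-level metadata views shows every sysadmin member:

```sql
-- List every login that is a member of the sysadmin fixed server role.
-- Run this before the lockdown to see how bad the sprawl really is.
SELECT m.name AS login_name,
       m.type_desc
FROM sys.server_role_members AS rm
JOIN sys.server_principals  AS r ON rm.role_principal_id   = r.principal_id
JOIN sys.server_principals  AS m ON rm.member_principal_id = m.principal_id
WHERE r.name = 'sysadmin';
```

In my experience, running this in front of a manager is far more persuasive than any email: the list is always longer than anyone expects.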
3. The Heavy Hand
Next, the DBAs take a deep breath and explain that the lockdown is going to happen; everyone has to start being realistic about what permissions they actually need, and not just what they want.
It doesn’t matter how many meetings you hold, or how polite or rude the DBAs are about it, this is happening because the environment is just too unstable.
Now the rest of this lockdown will continue in one of two ways.
4. The Pushback
Option 1: Everyone ignores the lockdown and hopes the problem will go away.
Option 2: The users all whine to their managers and higher ups that their permissions are being taken away. “We won’t be able to do our jobs! The whole place will come to a screeching halt! Customers won’t get what they need because we don’t have the permissions to service them properly!” The overall message is that they need to be in production, and they need to have full access. Because, of course, they never know what will come up and they’ll need to react at a moment’s notice.
5. The Implementation
Option 1: The managers, vice presidents, and maybe even the CIO get scared and scrap the whole project. They just can’t risk compromising production support.
Option 2: The DBAs are able to convince those in charge that the users will have the permissions they need and all will be well. The lockdown continues as scheduled.
6. The First Production Issue
If we made it all the way through to this point (following option 2 above), there is still the first post-lockdown production issue to survive. That issue will test the newly locked-down environment.
When the issue hits, some developer encounters a minor issue with his permissions. This is the perfect opportunity to completely blow the situation up, so he immediately runs to the VP and screams that customers are suffering! That the entire company is at stake!
The VP then freaks out, because this developer has been there for years and he’s never let them down before. He’s got to be telling the truth (and not exaggerating at all). So, the VP orders the DBAs to reinstate the developer’s sysadmin access.
The rest of the security lockdown soon crumbles, because once one exception is made, the rest will soon follow.
I’ve seen this whole thing play out many, many times.
The Lockdown Analysis: What went wrong?
That’s a cautionary tale if I’ve ever seen one. Now let’s take a minute to talk about what went wrong.
Remember when I said that the DBAs’ announcement email in step 1 was the big mistake? Here’s why: the DBAs and their manager did not properly inform the vice presidents and CIO first. I have gone both routes, and it almost always fails when done as outlined above. It almost always succeeds when I insert the following step.
Step 1A: The Upper Brass Email
Before sending out the email or letting anyone else know they’re thinking of trimming permissions, the DBA should meet – if possible, all at once – with all the managers, the VPs, and the CIO (you). At that meeting, the DBA should tell everyone what they’re planning to do and why. Then they must explain that when they send out the email, there will be plenty of pushback.
This meeting’s talk should look something like this:
Developers don’t want to lose the access they’ve had for years, even though they never use most of it. So they’re going to come up with all kinds of knee-jerk reactions. They’re going to come to you guys and say that customers will suffer, and that deadlines are going to slip, and the quality of the company will generally be compromised.
It’s okay. None of that will happen.
All of that posturing is really just a tantrum, because people don’t want things to change. And we don’t give in to tantrums.
The success of this depends on the quality of the information they give us. If they tell us all the permissions they need, we’ll probably get it right the first time. If they don’t, there may be a couple of minor security hiccups that force us to go back and grant another permission. That only takes a couple of minutes, and they can wait that long.
They’ll tell you that the issue is so important that it can’t possibly wait that long. “I have to handle it right now!” they’ll say. But think about that: What if an issue comes in while a dev is in the bathroom, or at lunch? It would have to wait for at least a few minutes, and probably a lot longer. And this is no different.
I just wanted you to know that we’re not only trimming permissions, but that the guys are going to come to you with doom and gloom and pleas for emergency permissions. And you need to know that now so you don’t fall for it. It’s nothing but a tantrum and this needs to happen.
Right now we’ve got three major production issues a week, and we can trace them all back to someone making a mistake with something they shouldn’t even be able to do. These permissions are really important to them, but you know what? In a couple of weeks they won’t even care anymore. They’ll be able to do anything they need to, and they won’t know the difference.
The only real difference is that after the lockdown, if they do something they shouldn’t, whether on purpose or by mistake, the system will stop them.
That is what your DBAs should be telling you. And that is what you should tell yourself if they don’t.
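To put the “it only takes a couple of minutes” claim in concrete terms: re-granting a genuinely missing permission is usually a one-line statement. The object and account names below are purely illustrative:

```sql
-- A report writer turns out to need to run one stored procedure
-- that the lockdown missed. Restoring access is a single GRANT
-- (assumes a database user already exists for this login).
GRANT EXECUTE ON dbo.usp_GetDailySales TO [DOMAIN\Steve];
```

Compare that thirty-second fix against the hours spent firefighting the damage a sysadmin-by-default culture causes, and the trade is not close.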
The Dangers of an “Unlocked” Shop
We alluded to it earlier: production issues happen when someone who shouldn’t have access makes a mistake in production. I’ve seen dropped tables, altered schemas, renamed procedures, changed permissions, and even deleted databases, all caused by some dev or report writer mucking around in production, where they have no business being. And that’s just the tip of the iceberg.
You should also conduct this lockdown because of external threats. SQL Server permissions are nothing to scoff at. Why do you think hackers steal data from companies so often? Because too many people and applications have permissions they shouldn’t have. Hackers are a very real threat, and once they get into your environment they can only do what you’ve already given them permission to do.
You can’t look at it like you’re giving your own developers and applications SQL Server permissions. You have to look at it like you’re giving thousands of hackers those permissions. Because if they get into your server, they’re going to use those permissions against you.
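In practice, the fix for the Steve example from earlier is to swap the blanket grant for a scoped one. A minimal sketch, using SQL Server 2012+ role syntax and illustrative names (and assuming Steve already has a database user in the reporting database):

```sql
-- Steve writes reports, so he needs to read data, not run the server.
-- Step 1: remove the blanket server-wide grant.
ALTER SERVER ROLE sysadmin DROP MEMBER [DOMAIN\Steve];

-- Step 2: give him read-only access in the one database he reports from.
USE SalesDB;
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\Steve];
```

After this change, Steve can still run every report he has ever run; what he can no longer do is drop a table, alter a schema, or hand his permissions to an attacker who compromises his account.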
Trust Your Security Experts
Stop letting your developers run your shop. Trust your DBAs. Because here’s a secret that your developers aren’t going to tell you: they don’t know anything about security or permissions. Your DBAs (hopefully) do.
Even though you’ve known this developer for five years, that doesn’t mean the developer knows what he’s talking about. Remember that the loudest wheel gets the grease. When it comes to production privileges, the developers are always the loudest wheels. But instead of greasing them with tough love, you grease them with extra permissions. And that’s why your environment is so unstable.
Executive summary: SQL Server shops are unstable because too many people have high level permissions. Have your DBAs conduct a lockdown, but first, warn all of the managers, VPs, and the CIO that people are going to complain and predict doom and gloom. There is no doom and gloom, just a lot of tantrums and dangerously unsecured servers.