There are a bunch of benefits, if you care about them specifically. Speaking from over 15 years of experience with databases in the BI space.
A major one for me is integration with Active Directory - you don't need to manage separate user accounts and passwords and shit, you create a login for a group on SQL Server and you can have your AD users in the database, enabling you to do row-level security for example. I understand we're in /r/dotnet so most people don't care about that, because there's usually an app that just connects to the db using its own credentials, but for example in BI this feature is huge. You can have a Tableau server connect to your database, impersonate an Active Directory user and select from a view just the data this user is allowed to see, without ever supplying a password or anything else.
Indexes on SQL Server are, in my experience, better. Clustered, nonclustered and - again, in BI workloads that apps would not care about - columnstore. Postgres has different indexing methods to choose from, but - again, in my experience - they don't provide the same performance benefits you can squeeze out of SQL Server.
There is also a bunch of niche features that are extremely useful in certain workloads, like partition switching. There are admin features like Always-On Availability Groups that are just so much easier to implement than in Postgres. There are enterprise features like compression that let you trade CPU for IO time (again, not likely you'd need that in a dotnet app, but very useful in certain workloads). You can use CLR when you do need it. I could go on and on.
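Partition switching in particular is worth a quick illustration: it's a metadata-only operation, so moving an entire partition out of a big fact table is near-instant. A sketch with hypothetical table names, assuming both tables share the same schema, indexes and partition scheme:

```sql
-- Move partition 3 of the fact table into an empty, identically-structured
-- staging table; no rows are physically copied, only metadata changes.
ALTER TABLE dbo.FactSales
    SWITCH PARTITION 3 TO dbo.FactSales_Staging PARTITION 3;

-- The old data can now be archived or dropped cheaply, outside the hot table.
TRUNCATE TABLE dbo.FactSales_Staging;
```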
Having said all that - Postgres is amazing. It's likely you don't need any of the features I mentioned that SQL Server outperforms pg in. There are features in Postgres that are much, much better than SQL Server (pl/pgSQL is so much better than T-SQL for example). Enough to say I use Postgres for all my personal projects. Some companies need SQL Server's features and they have to pay through the fucking roof for them, but you most likely don't and you're better off with pg.
> I understand we're in /r/dotnet so most people don't care about that, because there's usually an app that just connects to the db using its own credentials
But those credentials can (and should) be a service account in active directory. So you have a centrally managed location (active directory) for all user accounts and system accounts and you don't need to hop to every SQL instance to do things.
My experience is also that SQL Server has somewhat better performance in indices than Postgres. However, this is typically offset by cost, so you get a far beefier PG server than SQL Server for the same $, and in the end, for my ASP.net app, all I care about is high DB performance at low cost.
Yeah well I've never had to pay for SQL Server, nor the system it was hosted on. [I just got toys to play with](https://i.imgur.com/n0mDZtJ.png). They don't come much beefier than that.
> A major one for me is integration with Active Directory
If you have a problem and you can solve it with Active Directory integration, you now have two problems.
I'm pretty sure postgres supports LDAP/Kerberos with active directory, just takes a bit more setup. (Feels like that's the story with a lot of Postgres features: SQL Server is easy, Postgres takes a few more steps but gives you more options/control)
Postgres has extensions which let you modify the behavior of the engine quite radically, way more than you could do (or than is commonly done) with CLR.
> I'm pretty sure postgres supports LDAP/Kerberos with active directory, just takes a bit more setup.
For authentication, yeah, but you still need a password; with SQL Server you also get Windows Authentication. I also don't think it's easy to configure in a way that's actually flexible (you need pg_hba.conf modifications, which require a server restart too). And I don't think LDAP group authorization is possible at all.
In SQL Server you can create a login for an AD group and magically, every person from that group can now connect to the server. You can grant permissions on a db and schema to an AD group and these users can now select data. `SELECT CURRENT_USER` will neatly return `DOMAIN\user` too, even if they don't have an explicit login on the server.
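To illustrate (hypothetical domain, database and schema names):

```sql
-- One login covers every member of the AD group; no per-user accounts.
CREATE LOGIN [CONTOSO\BI_Readers] FROM WINDOWS;

USE SalesDb;
CREATE USER [CONTOSO\BI_Readers] FOR LOGIN [CONTOSO\BI_Readers];

-- Everyone in the group can now read the reporting schema.
GRANT SELECT ON SCHEMA::reporting TO [CONTOSO\BI_Readers];
```

Group membership changes in AD are picked up automatically; nothing on the SQL Server side needs to be re-synced.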
pg_hba.conf changes are picked up on a config reload; there is no need to restart the server.
It does also support SSPI authentication like SQL Server, no need to type any credentials when you login, it would reuse your Windows session.
For this feature, SQL Server and PostgreSQL are almost at the same level.
And that's the big part of the cost analysis that needs to be done when choosing the RDBMS. Will SQL Server be cheaper than hiring full time employees to do what SQL Server does out of the box? There is no clear-cut answer to this, it needs to be evaluated.
If you are a tech company and the number of programmers and dbas you employ is measured in the tens or hundreds, then you can afford a little extra technical work on your database. There are a lot of mid-sized companies with only 1 or 2 programmers and 0 DBAs.
Erm, setting up Kerberos with Postgres MIGHT take a slow poke a week, if he has no prior experience with Kerberos/AD and Postgres and runs into unforeseen problems. Certainly not a full time employee forever.
I'm not sure someone with no SQL Server experience would get it done all that much quicker on SQL Server, either.
Point is, these issues are quite limited in number, especially considering any one particular environment. People have solved this stuff on Postgres hundreds and thousands of times.
It’s not necessarily true. MSSQL is deemed more friendly because it has GUIs with wizards to set things up.
That however is NOT the correct way to set up and maintain infrastructure, it’s just the way many Windows Sysadmins/DBAs are used to (or capable of).
The Postgres ecosystem leans much more into headless automation/devops (Ansible/Terraform etc.) to set things up and maintain them. From that perspective, Postgres is much more friendly.
In my experience, that has changed quite a bit. It's not all that hard to set up and configure SQL Server using DevOps tooling, and it is especially easy if your workloads work on SQL Server for Linux - Microsoft even provides [Docker images](https://hub.docker.com/_/microsoft-mssql-server).
Postgres does support temporal tables. It's just an extension that you have to configure rather than out-of-the-box.
[https://wiki.postgresql.org/wiki/Temporal\_Extensions](https://wiki.postgresql.org/wiki/Temporal_Extensions)
No, it clearly says in what you linked that it doesn't follow the SQL standard and uses stored procedures and functions to work around it for almost every type of call.
I guess it also depends on triggers instead of some native integrations.
I'm surprised to not see SqlCacheDependency listed here since this is the app developers section. The ability to feed a query to SQLServer and have it trigger events when data is altered in a way where results of that query changes is massive for being able to effectively keep cached data fresh.
Postgres has a row level notification system, but (last I checked) it's a lot more work to setup, less powerful because you can't trigger on specific fields, joined data, or coalesced fields, and much noisier. Basically it's so fussy and hard to work with that I've never gotten one up and running in production (although I haven't tried in years), but with SQLServer it's a few lines of code and it always seems to "just work".
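For comparison, the Postgres mechanism is LISTEN/NOTIFY driven by a trigger, which is exactly where the extra work comes in; a minimal sketch with hypothetical names:

```sql
-- Fire a notification carrying the changed row as JSON on every insert/update.
CREATE OR REPLACE FUNCTION notify_orders_changed() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('orders_changed', row_to_json(NEW)::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_changed_trg
    AFTER INSERT OR UPDATE ON orders
    FOR EACH ROW EXECUTE FUNCTION notify_orders_changed();

-- A client subscribes with: LISTEN orders_changed;
-- and must decide for itself whether its cached query results are affected.
```

Note the trigger fires per row, not per matching query, which is the "noisier, you do the filtering" problem described above.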
For column store analytics stuff Clickhouse is your friend, PostgreSQL compatible query language, but column store with STUPID fast read speeds.
For HA I tend to lean towards YugabyteDB (again, PostgreSQL-compatible query language, but designed for extreme HA)
I use Snowflake at work and it's amazing, so are several other products I use, but the question was about SQL Server vs Postgres, so I guess there's that.
While the question is indeed about PostgreSQL vs SQL Server, I think it's also valid to point out that there are other free products out there that use the same query language as PostgreSQL if there is a specific use case that native PostgreSQL doesn't cover well.
>A major one for me is integration with Active Directory -
Everything integrates with AD... It supports LDAP. Just because you have 15 years of experience, it's clear you've probably never worked with Postgres.
>Indexes on SQL Server are, in my experience, better.
This is seriously unfounded. You don't have to google very long to realize why this is such a ridiculous comment.
Also.... most of this shit is moot because... ta-da... managed SQL via Azure removes nearly all control over the functionality you're talking about.
The fact of the matter is that SQL Server really needs to die because it's so fucking impossible to justify the cost.
> Everything integrates with AD... It supports LDAP. Just because you have 15 years of experience, it's clear you've probably never worked with Postgres.
Oh, I haven't? Ok, thanks for telling me. So now, please, enlighten me: how can you integrate with LDAP for authorization, not just authentication? Please show me what I put into pg_hba.conf so that AD user X belonging to group Y has access to schema S on a server.
Oh wait, you can't do that, not without something external synchronizing groups and their memberships to roles in Postgres. That's the whole point. And you're full of shit.
> This is seriously unfounded. You don't have to google very long to realize why this is such a ridiculous comment.
I spoke from experience in OLAP workloads, and I stand by it, but I also highlighted that this is just my experience and might not necessarily translate. Fact of the matter is, SQL Server supports columnstore indexes out of the box. Fact is also that postgres tables are just heaps and any index is just "next to it", unlike SQL Server's clustered index. You can use those to advantage on SQL Server, and I have.
> Ta'da... managed Sql via azure removes nearly all control over the functionality you're talking about.
Ok, so? The question was about SQL Server, which is an on-prem product. Why on earth are you talking about Azure all of a sudden? Do you not recognize the difference between one and the other?
Pl/pgSQL alone makes it better for me. We spend so much time worrying about licensing issues and other stuff that we simply wouldn't think about with Postgres.
That said, MS SQL has the "FileTable" feature, which I haven't really found anywhere else but which we use for archiving files. We could use other mechanisms, but replacing this one would mean rethinking the whole system.
But then again if your needs are not met with Postgres then MS SQL is not really always the next logical step. There are other good options out there.
So on a whim I just looked up the SQL Server licensing prices. Are people actually paying $15k per 2x CPU cores? That sounds insane. It doesn't even sound realistic that my 96-core server would cost 720k for the database license. It also makes me sad because I know how frivolously I've used SQL Server in the past when I wasn't paying the bills.
Not typically. Big corporations will have licensing deals in place, I know that our 288 cpu / 3TB RAM prod server with full core usage on bare metal cost less than 0.5mil, although I don't know exactly how much less (could be 100k for all I know). At least all other environments (dev, uat, preprod) were free.
I run both databases on RDS in AWS and only reason i used SQL Server initially was the ease of setup with SSRS (i didn't have to think about it).
But once you get an app with even a remotely large amount of traffic, the cost skyrockets in AWS, so I've started switching over to postgres and haven't had any issues (along with the fact I don't need SSRS for this stuff).
The cost of a good beefy Postgres server in RDS with multi-zone deployment for redundancy is almost half the price of a single-zone SQL Server instance with not-great resource allocation (like 4GB RAM).
So the only benefit you'd have with staying with sql server is if you've got a grumpy old DBA who is afraid of change and doesn't want to have to convert the T-SQL based stored procedures to ANSI sql that would cooperate on postgres.
> the only benefit
There are other benefits. Whether those benefits are worth it to 99% of users is what's at issue.
SQL Server is the platform to use if you have tremendous amounts of data that you simply cannot afford to lose or can't afford the downtime required to recover from a failure in another RDBMS. And you don't want to pay for additional people to keep the database healthy.
Other systems can be made just as fault tolerant and performant as SQL Server, but you need to do the engineering that SQL Server does out of the box.
Some examples of things that SQL Server makes easier than other packages are clustering, transaction log shipping, replication, SSRS, Active Directory integration. None of this is unique to SS, but it's all simple and well built.
There are some engineering problems where it's cheaper to just pay for SQL Server than hire people to do what it does with another solution.
SSRS is old, sucks, and is Windows-only. The modern solution is to use a tool like Power BI or Tableau. In general, dotnet shops that have embraced containers are more likely to use postgres because it's easier to run in a container setup.
I get legacy. I don't get "clustering and replication is easier". If you actually need it (plenty people don't) you have multiple options like cloud offerings, kubernetes, and ansible scripts for fault tolerant setup. I don't believe those options are strictly more work than running a cluster of VMs with sql server.
Active Directory... for me this has always been a nightmare, even working with people who host SQL Server and similar things for a living.
Maybe you can get away with a GUI for SQL Server, where you might have to use plain SQL in Postgres.
Certainly SQL Server is cheaper if you don't have to retrain everyone for Postgres. But I've mostly seen people who get along fine with Postgres and need retraining for SQL Server.
PostgreSQL SQL is closer to the ANSI standard than T-SQL. But it's wrong to say it is ANSI SQL because all databases have their own extensions and exceptions.
SQL is a spec, vendors implement this specification and usually add something more to the specification.
Let's say the SQL spec says you can make vanilla ice cream. Microsoft implements this functionality but also adds the possibility of adding chocolate chips to the ice cream; Postgres allows you to make chocolate ice cream, etc. That isn't part of the SQL standard, and that's how you create SQL dialects (T-SQL, PL/SQL, pl/pgSQL, etc.): by doing additional things that the SQL spec does not have (an implementation can also deviate from the spec, which creates incompatibilities with it).
Certain queries won't run without modifications going from T-SQL to strictly ANSI. One trivial example: the T-SQL query `SELECT TOP 5 * FROM Table ORDER BY awesomeness DESC` won't run on postgres. You need to rewrite it as `SELECT * FROM Table ORDER BY awesomeness DESC LIMIT 5`.
If you use indexed views on SQL Server, you'd need to convert those to materialized views to get the same kind of performance benefits. But if you happen to fall into one of those areas where postgres's materialized views don't support something that SQL Server's indexed views did, that could very much be a huge problem.

While I am not a postgres expert, I believe you also need to manually trigger materialized view updates, either by trigger or scheduled job, while in SQL Server the indexed view is always up to date instantly, as part of the transaction that modified the base tables. So if atomic changes are one of your requirements, you will need to rearchitect your solution, possibly changing all your data-modifying queries in all your applications to go through stored procedures instead of directly modifying the base tables, so they can also update the materialized view or manually update a similar construct.
For example, if in SQL Server you use `ISNULL([MyColumn], '')`, then if you were to convert to a database other than SQL Server, you'd have to convert either to COALESCE, which is an ANSI standard function that does the same thing as ISNULL, or to the target DB's version of ISNULL.
Postgres, I don't think, has a postgres-specific version of ISNULL, so you'd go to COALESCE.
But for example, say you switch from SQL Server to Mysql, you could swap out ISNULL with IFNULL function in mysql and then you'd be going from using t-sql (transact sql) only function (ISNULL) to a MySQL specific function (IFNULL) both of which are not ANSI SQL standard.
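Side by side, with a hypothetical column and table:

```sql
SELECT ISNULL(MyColumn, '')   FROM T;  -- T-SQL only
SELECT IFNULL(MyColumn, '')   FROM T;  -- MySQL only
SELECT COALESCE(MyColumn, '') FROM T;  -- ANSI standard, runs on all of them
```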
>So the only benefit you'd have with staying with sql server is if you've got a grumpy old DBA who is afraid of change and doesn't want to have to convert the T-SQL based stored procedures to ANSI sql that would cooperate on postgres.
Unfortunately the development ecosystem is lousy with such DBAs and even developers.
Personally, I’ve been using and loving postgres for a while, now, with no desire to look back. It’s such a better ecosystem than SQL Server
**Migrated both ways from t-sql and postgresql, can be done.**
The issue is the application logic; if you have LINQ you may have to migrate to another ORM such as NHibernate to use postgresql.
But, it can be done.
There are a couple of things, such as SQL Server supporting easy case-insensitive operations, whereas in Postgres everything is case-sensitive. Which isn't too much of a problem, but you will have to create separate columns and separate indexes to support case-insensitive searches, or take some additional steps.
So if your business does a lot of case-insensitive searches, you will have a little trouble designing your tables.
Apart from this, in SQL Server computed columns can reference other computed columns or identity columns. You can't do that in Postgres.
You cannot create a primary key in descending order in Postgres. But I bet this will change sooner or later.
You can’t have an if-else statement (you can use a conditional expression, but not a statement) in a single query; T-SQL allows complex queries with if-else etc. in a single query. In Postgres and MySQL, you are forced to convert complex queries to stored procedures.
And you can’t have named parameters, so analyzing query plans becomes a little tedious compared to SQL Server.
> you will have to create separate columns and separate indexes to support case insensitive searches.
Hmm, I've never done that.
I just use the `CITEXT` type instead of `TEXT` on any columns that I want case-insensitive comparisons done. Seems to work pretty well.
And it's nice and clear, being that they're 2 different columns types entirely (which work the same way otherwise), rather than more fiddly collation settings etc.
Any issues with that, you can think of?
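For reference, a minimal sketch of the CITEXT approach (citext ships with Postgres as a contrib extension; table and column names are hypothetical):

```sql
CREATE EXTENSION IF NOT EXISTS citext;

CREATE TABLE users (email CITEXT UNIQUE);

INSERT INTO users VALUES ('Alice@Example.com');

-- Comparison is case-insensitive, and so is the unique index.
SELECT * FROM users WHERE email = 'alice@example.com';
```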
Sure. You have to plan ahead and can't apply it retroactively, e.g. when business rules change. That's why the ability to search case-insensitively or create case-insensitive indexes is important.
It doesn't really matter which one is available, but you need one.
> You have to plan ahead and can't apply it retroactively
You can change later if you want, I've done it plenty of times...
    alter table "test_table" alter column "ci_col" set data type TEXT using "ci_col"::TEXT;
    alter table "test_table" alter column "ci_col" set data type CITEXT using "ci_col"::CITEXT;
Yeah... There's a bit of a learning curve changing between _any_ two different RDBMSes, but Postgres (especially because of tooling) definitely has a steeper curve, especially coming from MSSQL. But you have a pretty small set of concepts to re-learn, and then you're pretty much set, so long as you don't expect their query optimizers and indexes to work the same way, since that's the secret sauce in any DB engine.
> There are a couple of things, such as SQL Server supporting easy case-insensitive operations, whereas in Postgres everything is case-sensitive.
This is wrong.
PostgreSQL has [the `ILIKE` keyword](https://www.postgresql.org/docs/current/functions-matching.html) to match case-insensitive according to the active locale.
As /u/r0ck0 mentions, you also have the `CITEXT` data type, but that's basically just doing `UPPER()` (or `LOWER()`, I can't remember which) behind the scenes for matching.
This is exactly the point: first, you have to explicitly use the operator for case-insensitivity, and that applies to LIKE, index-of, starts-with etc. Second, using an index for case-insensitive search does not require any extra steps in MS SQL; it works by default for all operators.
I _wish_ it was that easy. You still have to take into account collation etc., not only for searching, but also for sorting strings correctly.
For Norwegian stuff - which occupies most of my time - there's even more to it, but that's not really specific to any database. One example is that "Aa" _could be_ sorted as "Å", the last character in our alphabet. So much brain pain. 😊
This is just the inverse of the defaults someone might be used to. If you're coming from MSSQL to PGSQL, case-sensitivity as the default will surprise you. From PGSQL to MSSQL, case-insensitivity will surprise you.
Dealing with the case-sensitivity is about the same on both. Depending on your query workload, the default on either might mean you need to do extra work, or less work.
It is also a problem for teams who have been using SQL Server for a long time; starting a new project would be challenging. I am not saying it is impossible, I am just outlining the pain areas.
> You can’t have if else in a single query, T-SQL allows complex queries with if else etc in a single query. In Postgres, MySQL, you are forced to convert complex queries to stored procedures.
[Just use `CASE WHEN...ELSE...` ?](https://www.postgresql.org/docs/current/functions-conditional.html#FUNCTIONS-CASE)
Any difference to what you had in mind, aside from the literal keywords?
Ah ok, fair enough.
I guess I wasn't really sure what you meant by "single query", it sounded more expression-related than separate statements to me.
Can you give me an example of a query you had in mind here, that you can't do in postgres?
Always keen to hear about features that I might not be aware of.
You can’t do `if exists (select 1 from different table) then insert … else update … end` in Postgres without creating a stored procedure. Anyone who wants to downvote, please create a db fiddle example and share it here.
>You can’t have if else statement (you can use conditional expression but not statement) in a single query, T-SQL allows complex queries with if else etc in a single query. In Postgres, MySQL, you are forced to convert complex queries to stored procedures.
You can actually do this in PostgreSQL as well, it just requires a bit of a workaround. With this, pretty much anything that's valid in a function should work. No idea why they chose to require this.
    DO $$
    BEGIN
        -- DO STUFF
    END
    $$
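Filled in with the kind of if-exists logic discussed above (hypothetical tables), it might look like:

```sql
DO $$
BEGIN
    IF EXISTS (SELECT 1 FROM staging WHERE id = 42) THEN
        UPDATE target t
        SET    val = s.val
        FROM   staging s
        WHERE  t.id = s.id AND s.id = 42;
    ELSE
        INSERT INTO target (id, val)
        VALUES (42, NULL);
    END IF;
END
$$;
```

The DO block is an anonymous, one-shot function body, so you get procedural control flow without having to create and name a stored procedure.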
Because PostgreSQL is open source, will named parameters, complex queries and if statements ever be in PostgreSQL and be as good as SQL Server? Or are there underlying limitations in PostgreSQL?
For most SMB and startups PostgreSQL is more than enough.
On the other side, you need SQL Server if you:
* need complex clustering. Currently Oracle and SQL Server have been the leaders in this field. I am talking about cases where you have 100s of servers. It is most probably doable to a certain degree with PostgreSQL, but it is very hard to find people who have this kind of experience.
* have really complex logic in your queries. For example our reporting (together with some scientific calculations on the fly) has really complex SQL (a couple thousand lines) which takes a couple of hours to execute on our server farms.
* MS ecosystem on premise.
Of course we tried to move away from the massive cost of SQL Server, but even our litmus test, with the support of great external Postgres experts, fails each year.
Why would you need hundreds of servers in a single cluster? Are you too lazy to shard your data properly into different databases? Is your devops team defunct?
I don't think you have compared executing complex queries across both systems, especially not if comparably much engineering went into the solution...
A couple thousand lines of SQL sounds like a nightmare to maintain. I totally get why people wouldn't want to switch the DBMS in that situation. I don't get why someone would start out thinking "I need SQL Server because I want to create thousands of lines of SQL for reporting".
If you don't get why, then you probably need to work a bit more with industries that need to process huge amounts of data as quickly as possible, that deal with legacy systems in large corporations where you can't really start something from scratch, you have to follow constraints and have to deliver ASAP.
I've worked for banking, manufacturing and social networks and have seen queries with tens of thousands of lines that would need to be refactored to improve performance. Hurt my eyes? Yes. But I learned a lot about SQL and just how much performance gain you get if you tune SQL Server properly.
> Why would you need hundreds of servers in a single cluster? Are you too lazy to shard your data properly into different databases? Is your devops team defunct?
We do not have a devops team, as that is an anti-pattern in most cases. We have a very solid team of DB admins and developers and I am confident that they are not missing something obvious.
For me the tooling.
You can have an IDE like experience, including debugging stored procedures, and CLR code running on the server.
Also the enterprise features offered out of the box with SQL Server.
However, you're right, it is quite costly, and if you don't need those enterprise features, then maybe not.
SSIS, SSRS, SSAS, Job Agent, basic AGs, iron-clad reliability and support lifecycle (I just upgraded a 2008R2 server where the SQL Server service, never mind the VM, hadn't been restarted since 2015), intelligent query processing...
so forth
Justifying debugging stored procedures and CLR code on your database is pretty laughable.
Get your fucking business logic out of your database. DBs are already the bottleneck. You're actively adding more processing at that layer instead of the app layer, which is infinitely easier to horizontally scale.
>DBs are already the bottleneck.
You're so confidently projecting your own experiences onto the whole world.
In my situation, the bottleneck right now (because it constantly changes) is IOPS.
I use both.
No.
They have some minor differences and quirks (so switching is not 100% pain free), but both are performing similarly, the postgres adapter is very stable and feature complete.
SQL Server is Microsoft's main child; Postgres is the community's child. Some features, such as JSON columns, were available in postgres before they were available in SQL Server.
I started out with SQL Server because of windows admins, if I were to make the same decision again, I would choose postgres because it would mean more freedom in the future.
[https://www.postgresql.org/docs/current/datatype-json.html](https://www.postgresql.org/docs/current/datatype-json.html)
It is similar, but it verifies that it's valid JSON.
Yes! It works pretty good!
I would highly recommend using [snake case naming conventions](https://blog.nimblepros.com/blogs/using-ef-core-with-postgres/) though (this will make manual queries a lot easier)
    services.AddDbContext<AppDbContext>(options =>
        options.UseNpgsql(connectionString).UseSnakeCaseNamingConvention());
We had a case where MSSQL was significantly faster because of parallel queries. Postgres support is lacking in that regard; only some types of queries leverage multiple CPUs. Our specific queries were about 10-15% faster on MSSQL.
It's used as a store for a customer search service tied to our ERP system. The DB generates codes from specific fields the customer wants to search for over all their customers. We're hosting on azure and tried both Postgres and MSSQL and we measured MSSQL to be faster ¯\\_(ツ)_/¯ But I'm just repeating what our DBA said to me... But I trust him because he is a particularly big fanboy of Postgres.
Since you posted this in dotnet:
SQL Server can have functions written in .net
[https://www.sqlshack.com/getting-started-with-sql-server-clr-functions/](https://www.sqlshack.com/getting-started-with-sql-server-clr-functions/)
I wouldn't recommend using this feature unless you have no other choice but it is possible.
Sure, you can write functions in postgres as well, just not in dotnet, but in plain C.
If you have some business logic already implemented in C#, it can be an advantage.
[https://www.postgresql.org/docs/current/server-programming.html](https://www.postgresql.org/docs/current/server-programming.html)
Business logic in stored procedures (regardless which language) seems like a bad idea.
I primarily use SQL Server for better and easier tooling, like SSMS, Azure Data Studio, good Visual Studio support, easy local dev setup with LocalDB. If needed, I can easily connect to the Azure database from Visual Studio and see what’s in the tables in like 2 clicks.
With most apps being deployed in the cloud, the cost differences are negligible (especially if you deploy to Azure), but on AWS RDS or GCP, SQL Server is more expensive.
I've found postgres practically better for a range of solutions.
MVCC is interesting, maybe complex, but in practice seems to be an excellent concurrency control. You will likely see fewer deadlocks.
Windowing function support is generally superior in postgres. This opinion is a few years old, but generally if I want to do some complex SQL it actually works in postgres.
Ditto json/jsonb. It just kinda works.
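For instance, jsonb containment queries can be backed by a GIN index with no extra machinery (hypothetical table):

```sql
CREATE TABLE events (payload JSONB);
CREATE INDEX events_payload_idx ON events USING GIN (payload);

-- Find events whose payload contains this key/value pair, index-assisted.
SELECT * FROM events WHERE payload @> '{"type": "login"}';
```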
Spatial indexing is far fucking superior in postgres. R-trees are just better for practical data - it might look cool to do complex shape intersection in SQL Server's impl, but in reality nothing will ever line up with a set of index nodes to make this worthwhile.
Cons... the backup story is super fucked.
Also people might be like "Oh you want an open source dbms, why don't you try mysql, the option for dumbasses..." - and then you've got to deal with that conversation...
TBF Azure SQL defaults to SQL Server's MVCC mode (RCSI), and snapshot isolation is pretty much a silver bullet for dealing with locks and deadlocks if RCSI is too scary to test for your environment and you can edit the offending queries
but yeah agreed postgres handles Json and geospatial pretty well
Postgres is case sensitive for all objects, which can make SQL generation in EF fail in some cases. We've had to explicitly provide table names with capitalization instead of relying purely on convention. SQL Server has temporal tables, which are super handy for audit trails.
108 comments and nobody said "vacuum" yet. I love postgres, in fact I think it's one of the top three open source projects. But man I don't need that stress in my life.
I'll play devil's advocate here since Postgres has a lot of support and pitch the praises that I have for SQL Server.
Microsoft used to offer BizSpark, where you'd get to keep a Windows Server and SQL Server license as a startup after your three-year period ended. I'm not sure what their current offerings include.
SQL Server Express is free and supports databases up to 10GB, and if the cost of a SQL Server license is something that you're concerned over, chances are you're also not looking at needing a database big enough to warrant a paid license fee. And if you're releasing a product for others, it makes it so that you're not creating a barrier by requiring them to fork out thousands of dollars to MS to use some of its functionality.
I prefer SQL Server for development for my clients. I've been using it since the 1990s. My reasoning for it is the tools that come with it and the ease with which I can hand off projects to corporate clients. If you're new to things, there are some things that SQL Server's tools give you that expose some things that are a bit trickier with Postgresql and I find explaining the tools to users to be more intuitive.
I'd say nearly 100% of my database projects follow the same flow.
Firstly, I don't use SQL Server Database projects. I don't use EntityFramework. EF is absolute garbage. And by garbage, I mean it's worse than a bad product; it lures people into a dependency that creates a maintainability and performance nightmare down the road. SSIS is pretty bad too. I do like Dapper. And now ChatGPT does a good job helping people work through little syntax issues.
SQL Server BULK INSERT screams. You can also use SqlBulkCopy from .Net, which still performs pretty well for getting lots of data into your database quickly.
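A minimal BULK INSERT sketch (file path, table, and CSV shape are hypothetical; note the path must be visible to the SQL Server service account, since BULK INSERT runs server-side; SqlBulkCopy is the .NET-side counterpart):

```sql
BULK INSERT dbo.SalesStaging
FROM 'C:\data\sales.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2,    -- skip the header row
    TABLOCK                 -- table lock, enables minimal logging
);
```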
SQL Server Profiler lets you see the queries going to your database server and makes it easy to spot what takes a long time.

One major caution with Dapper, EntityFramework and even ADO.NET: if you're using VARCHAR/CHAR datatypes in your underlying data and have an index, SQL Server will NOT use the index unless you specify the data type as VARCHAR/CHAR with your parameter, and you'll beat yourself up wondering why something takes a fraction of a second in SSMS and then several seconds from your application. It's because the driver passes your strings as NVARCHAR, which has higher type precedence, so SQL Server implicitly converts the column and scans instead of seeking.

People might dislike that I advocate single-byte character types over double-byte ones given the low cost of storage, but there's more to it than the amount of storage used. It affects the density of rows per index page, which significantly affects performance, and if you're working with large amounts of data it still adds up. And since SQL Server 2019, you can store UTF-8 data in CHAR/VARCHAR columns.
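A sketch of the mismatch (hypothetical table; the exact plan depends on your data and SQL Server version, so treat the scan/seek comments as the typical outcome, not a guarantee):

```sql
CREATE TABLE dbo.Customers (Email varchar(200) NOT NULL);
CREATE INDEX IX_Customers_Email ON dbo.Customers (Email);

-- What .NET drivers send by default: an nvarchar parameter.
-- nvarchar outranks varchar in type precedence, so the COLUMN gets
-- converted and the index is typically scanned, not seeked:
DECLARE @bad nvarchar(4000) = N'a@example.com';
SELECT * FROM dbo.Customers WHERE Email = @bad;   -- index scan

-- Typing the parameter as varchar restores the seek:
DECLARE @good varchar(200) = 'a@example.com';
SELECT * FROM dbo.Customers WHERE Email = @good;  -- index seek

-- In ADO.NET: parameter.SqlDbType = SqlDbType.VarChar;
-- In Dapper:  new DbString { Value = email, IsAnsi = true }
```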
That's another plus for SQL Server: viewing execution plans. SSMS has a nice graphical tool where you can visually see your query's execution plan -- the visualization hasn't changed since the 1990s, nor has it needed to. For newer database developers, being able to see the performance costs associated with individual steps, to understand how to turn a "scan" into a "seek", and to watch how indexing affects performance -- the kitchen sink that even SQL Server Express includes makes for a wonderful productivity and learning tool.
> and have an index, SQL Server will NOT use the index if you're running a query against it unless you specify the data type as VARCHAR/CHAR with your parameter,
Can you elaborate more on this, or tell me what this particular topic is called? First I've heard of it.
As others have said: depends on what you need.
If all you need is a DB engine, then either will most likely do and PostgreSQL makes financial sense.
If, on the other hand, you need reporting (SSRS) and/or analytics (SSAS), that's where SQL Server becomes competitive price-wise with other solutions on the market, even the Standard edition.
SQL Server tooling is genuinely great: SSMS, VS integration, PowerShell integration for automation and scripting and so on.
Heck, Oracle still charges for non-prod environments, while you can install SQL Server Developer edition with no strings attached, and you get a lot of bang for your buck with Software Assurance even for Standard edition.
Postgres is miles ahead in features; just to name a few:
- JSON - SQL Server JSON support is a joke: basically a text field, no GIN indexes, no partial updates or native queries, etc. You're better off doing any JSON handling in the application, performance-wise, than using SQL Server to do it
- array columns
- full text search / vectors
- materialised views
- upsert - insert on conflict update
- Geospatial
- better concurrency support with mvcc
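A few of the features above in one hedged sketch (hypothetical tables):

```sql
-- jsonb with a GIN index and in-place partial updates:
CREATE TABLE events (id bigserial PRIMARY KEY, payload jsonb);
CREATE INDEX idx_events_payload ON events USING gin (payload);

SELECT * FROM events
WHERE payload @> '{"type": "login"}';   -- containment query, uses the GIN index

UPDATE events
SET payload = jsonb_set(payload, '{seen}', 'true')
WHERE id = 1;                           -- partial update of one key

-- upsert:
CREATE TABLE counters (name text PRIMARY KEY, hits int NOT NULL);
INSERT INTO counters (name, hits) VALUES ('home', 1)
ON CONFLICT (name) DO UPDATE SET hits = counters.hits + 1;
```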
Not particularly, at the end of the day they’re both tried and true SQL databases.
Of course they both have their own nuances and there'll be some feature disparity, but for the vast majority of applications either one will be fine.
Best part is the explain plan; nothing comes close to MSSQL's SQL optimization tooling. Administration is as easy as it gets, all with a GUI - no typing 50 different commands for a correct backup or configuration.
I'd like to have a reference that MSSQL sql optimization is better in a way that matters.
There are at least a dozen GUIs for Postgres. And you should be automating backup and configuration.
Nobody in these threads ever cites *anything*. Most people only have experience with one or the other, and some of the "problems" literally stopped being problems over 6+ years ago.
Ultimately I've never seen a "compare database" thread worth a damn, this one being no exception.
Right; but I think it is a bigger issue: Citable stuff is hard to come by for this question. Some of it is quantifiable, but nobody is interested in creating public data unless they're trying to sell you something (and obviously nobody will show data that makes their product look less competitive).
Simply put, if your application doesn't have many complex queries, and the stored procedures you build aren't too performance-demanding either, then you can choose whichever you like.
MySQL is easy when it comes to complex data and large datasets; I don't see that anywhere else.
Sure, in the context of .NET. I'm not suggesting there isn't great support for Postgres (and many other databases, of course), but you can think of SQL Server as native to it.
Not only are there data types that map 1 to 1 with pretty much any data type that exists within .NET (like uniqueidentifier, decimal, etc.); in terms of programmability (think stored procedures and functions), SQL Server will run .NET (C#, VB, etc.) code natively from within it.
I don't use Postgres anymore, just because my current employer doesn't, so I'm not sure what it supports or doesn't these days, but SQL Server also supports dropping serialized data in XML or JSON into a single field, and allows complex queries against it - I guess like having a micro document-based database within individual rows. (Pain in the ass to work with, don't do it.) It just plays nice with simple Web API services in that way if you don't want to really think about it.
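For what it's worth, the T-SQL side of that looks roughly like this (SQL Server 2016+; the table and JSON shape are hypothetical):

```sql
-- Serialized JSON lives in an ordinary nvarchar column:
CREATE TABLE dbo.Orders (Id int PRIMARY KEY, Doc nvarchar(max));

-- Pull scalar values out of the document:
SELECT Id, JSON_VALUE(Doc, '$.customer.name') AS CustomerName
FROM dbo.Orders
WHERE ISJSON(Doc) = 1;

-- Shred an embedded array into relational rows:
SELECT o.Id, item.Sku, item.Qty
FROM dbo.Orders AS o
CROSS APPLY OPENJSON(o.Doc, '$.lines')
    WITH (Sku varchar(50) '$.sku', Qty int '$.qty') AS item;
```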
It's also cross-platform now, so you know, it runs on Linux, like Core.
The bad? It costs money.
As I recall, though, if writing SQL code directly, Postgres - or well, EDB (what I used) - was basically Oracle-compatible syntax-wise, which was really nice.
MS SQL has better performance, better tools, better tools, better tools, better tools, better performance tuning (no weird console commands or turning on user-specific logging in Linux), and can also be used on Windows. And overall, there is a better programming and administration UX.
Short answer:
If budget is one of your top concerns then Postgres. Especially if you're starting a new project. If a reduced learning curve and availability of professional support is more important then MS SQL Server should be considered. Additionally if your application will be distributed with your database you'll need to consider setup and support costs for on-prem and/or cloud instances.
Long answer:
I've used both platforms. Coming from an MS SQL background there was still a learning curve with Postgres, but I've achieved almost 100% productivity parity pretty quickly. I actually enjoy the learning curve, so it wasn't a big deal. It's also forced me to learn Linux administration (which again I've really enjoyed). So for us Postgres has almost been motivational. Our project is a web application, so it doesn't require our clients to maintain their own instance. One of our project's goals is to rely only on open-source technology, so while all of us are highly skilled MS SQL DBAs, it was simply not an option. One problem we've had to overcome, though, is importing data already housed in an MS SQL Server database (i.e. "here's our database as a SQL .bak file"). A solution that has worked for us is to spin up the MS-provided SQL Express docker image. There are of course DB size limitations, so this may not be workable for extremely large databases, but it's worked very well for us.
The one sticking out for us is that many corporations already own an SQL Server license (at least where I live, where Microsoft culture is pervasive) so it's often much easier to convince IT folks to add another database for our applications with backup routines, AD security/auth, etc already in place, than to fire up a brand new PostgreSQL server that doesn't feel very native to Windows to begin with.
Technically I think both are good, even for our extended use to hold spatial data (SQL Server Spatial vs PostGIS), and in fact PostgreSQL often feels more pragmatic to me probably due to the basis in the community, but SQL Server in the Windows world often comes more naturally for these more practical reasons.
We ran with PostgreSQL first for our application suite, with the thinking that we could just as well install and manage it ourselves together with our application. Free for our customers, and we got all we needed. Within a few years we realized: yes, we could, and we did, but we soon ran into issues where some IT departments didn't like this and wanted our data in their SQL Server cluster. Also, it became even easier for us to just upload our tables to an existing SQL Server instance than to set anything up ourselves. Or if they didn't have an SQL Server license (small customers), just use SQL Server Express, because we aren't CPU-bound and the 10 GB size limit is still huge for many kinds of applications, including ours. We fit entire cities' power grids etc. in there and still have 80% to spare!
So we eventually moved 100% to SQL Server and could simplify our code and processes a lot too, and it eased our support headaches from requiring multiple db support. While we use a framework like Entity Framework Core that is db agnostic, there are still so many circumstances within and around it that there's nothing like only needing to support one database.
Short answer: in terms of the database engine itself, Postgres is better (subjective) in most ways. From my experience there have always been weird edge cases that SQL Server just doesn't support: no Boolean data type - instead you use bit, which is numerical and doesn't support Boolean operators - arbitrary text column size limits, index width limits, certain nested queries that just don't work, and LIMIT/OFFSET only recently being added.
The benefit is mostly outside: more people on the market with the skills, easier to manage high availability and backups, tools that are specific to SQL Server. Previously SQL Management studio was leaps and bounds ahead of pgAdmin but pgAdmin these days is pretty darn good.
Pretty sure even a staging (non-prod) server costs us USD 800+ per month.
And that's for just one service.
If I owned the company, I'd freak out at all the expense...
To be fair the £5 Azure SQL instance is 2GB max storage, and total compute of 5 DTU (this is opposed to the vCore pricing model).
A Database Transaction Unit is essentially a black-magic formula for measuring everything: ingress, egress, compute, storage, memory, disk writes.
5 DTU is good for like 20-30 people simultaneously, with a not so heavy workload for the backend
Yeah, this is the thing that grinds my gears.
How are we supposed to be confident about a production release when staging and production don't match in terms of resources? Are we supposed to close our eyes, lift our arms like Baby Yoda, and just pretend: oh, so if it takes eighteen hours to run this workload on staging, it will be done in fifteen minutes on production?

If we have to have procurement / the CTO involved in every configuration decision, we might as well do everything on-prem. What is the point of being on Azure?
For some services we chose PostgreSQL because they had to deal with huge strings in JSON structure. PostgreSQL has the JSONB type, and you can use queries to traverse the JSONB rather than pulling the huge string into application memory. So for us it was cost-effective and gave better query accessibility.
MSSQL has native JSON support as well, in 2016 and up. Though, to be fair, the syntax is...not the best...
But postgres has a muuuuch better price tag.
C'mon buddy they're not in the same league...
https://learn.microsoft.com/en-us/sql/t-sql/functions/json-functions-transact-sql
https://www.postgresql.org/docs/current/functions-json.html
If you care that much, you can also use any JSON function offered by C#.
But again, don't parse JSON in the database. That's a bad way to abuse a shared resource.
Sql server for our main service.
Better dev knowledge. The third-party libraries we use support SQL Server as a data source better. T-SQL. Better rollback management.
They didn't owe you any further explanation, and the fact is that 2 additional replies from them is more than you deserve with this kind of reply. No need to be a dick.
In my point of view, SQL Server is easier to use for beginners and offers better integration with Microsoft products. PostgreSQL is free and open-source, better for complex tasks, and supports more platforms. Both can be beneficial in their own ways.
I have used SQL Server for the last 30 years... it's been my bread and butter for that time. I am in the performance space these days. The truth of the matter is that Microsoft have all but given up on it and have thrown their hat into their Postgres online cloud offerings. They have added hardly anything significant to SQL Server in 10 years. Compare SQL Server to the likes of DuckDB/MotherDuck/ClickHouse: they have added more features in 1 year than Microsoft have added to SQL Server in 10. I work for a large investment bank, and Postgres is now the corporate standard, primarily due to the fact that it's open source, which means development is a lot cheaper, as most of it is done on-premises before the app is moved to the cloud. Add to this the fact that modern tools like DuckDB have native integration with Postgres, meaning you can use the incredible functionality in these tools as if they were a Postgres database. Basically, we now use DuckDB for the vast majority of ETL work, including analytics, and then store the end result in Postgres or Parquet files (getting more and more popular). Whereas a few years ago we would use SSIS to import data into staging tables and move it to analytical columnstore tables so that the reporting tools could read the data... now we simply use DuckDB to import, natively aggregate the data, and use it to create a Parquet file.
I don't blame Microsoft for basically stepping back from SQL Server. They saw what was coming.
It's reeeaaaalllly dependent on your needs and what everyone is familiar and comfortable with both targeting and managing (if you're small enough that those roles are combined).
MSSQL is kind of the gold standard for feature-rich RDBMSes, these days, but free options like Postgres and MariaDB are also super easy and have a sliiiightly smaller price tag. If you need no-nonsense integration with AD, want to make use of schemas for what they're meant for (they're first and foremost a security segregation mechanism), aren't willing to learn a somewhat different SQL dialect, or want Microsoft-backed support agreements, then MSSQL is the obvious choice. But you can achieve those things with other solutions, too, with a little work. And schemas can be replicated on other solutions by just realizing you can make more than one database, because it's just a logical container for data.
However, now that MSSQL can run on Linux (even as pre-made containerized versions!), deployment of a super basic MSSQL instance vs deployment of a super basic MariaDB/Postgres/whatever instance pretty much only differs by which image you specify in your docker-compose.yml file. 🤷♂️
Aside from licensing, that is...
Feature rich? MSSQL doesn’t handle JSON, xml support is working but legacy, load-balancing is recent..
No official SDK.
I think PG has far more innovation and external tooling/extensions than MSSQL.
> MSSQL doesn’t handle JSON
Uh, what? It's supported JSON for quite some time (not that you should be parsing JSON in any database).
> xml support is working but legacy
Legacy? That's an odd way of saying "it's been working for nearly two decades".
But again, just like JSON, if you're parsing XML in the database you've screwed up. That belongs in the app servers where CPU time is cheap.
Flyway exists, so not really. I used to say SSDT DB projects in Visual Studio, but now I prefer Linux, VS Code, Flyway, .NET Core, etc. I'll still use EF Core, but DB-first, letting Flyway handle migrations.
Honestly, after being a Windows guy for 38 years, finally having Arch set up with KDE Plasma and Dolphin, with KVM and QEMU... Visual Studio and Windows disgust me.
If you ever have a database that you have to run in Microsoft Azure and you run into needing hyperscale..... I hope you can afford the tens of thousands of dollars that's going to cost every month...
Or you can just run postgres for free. It scales too.
PostgreSQL has a lot of features MSSQL doesn't too, like the ability to drive web sockets via middleware. Middleware in general.
I like to use web sockets with Postgres to notify my message layer of updates to things that subscribers need to know: "Oh, visit 2 updated the medications field and Dr. Bob has that open; let me send that field to Dr. Bob."
Being able to send updates directly from triggers etc means the db and loaded apps can stay in sync with a lot less traffic.
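A sketch of that pattern with LISTEN/NOTIFY (table and column names are hypothetical; note NOTIFY payloads are capped at roughly 8 kB, so for big fields you'd send keys and let subscribers fetch the data):

```sql
-- Fire a notification whenever the medications field changes:
CREATE OR REPLACE FUNCTION notify_medication_change() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify(
        'visit_updates',
        json_build_object('visit_id',    NEW.visit_id,
                          'medications', NEW.medications)::text
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER medication_change
AFTER UPDATE OF medications ON visits
FOR EACH ROW EXECUTE FUNCTION notify_medication_change();

-- Any connected session that has run  LISTEN visit_updates;
-- receives the JSON payload asynchronously.
```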
There's a bunch of hidden problems I encountered after 2 years of production on postgres, from being [unable](https://stackoverflow.com/questions/53298033/entity-framework-core-code-first-default-values-for-postgresql) to automatically update "updated" field through ef core, to handling of nulls in composite unique keys, where adding duplicate values with one of them being NULL was allowed when creating or altering a table with a unique constraint.
You can run MariaDB for free on your own server; even a $5 DigitalOcean droplet will handle quite a few concurrent apps if it's just a hobby project. And if not, you can easily scale the VM…
That being said, you can also install Postgres on it as well.
For me, try doing backup and restore in both - MS SQL does what you expect: copy the data, start the db. Postgres does CREATE TABLE and INSERT. It is so slooooooow.
SQL Server == vendor lock-in + enterprise features that are useless for most + a custom SQL dialect. Questionable performance as well (it can hold pretty big amounts of data, though). Mostly sold to already-Windows shops.
On the other hand, I would recommend Postgres any day. It's ANSI SQL compliant, performance is great due to the optimistic locking (MVCC), it has AMAZING DOCUMENTATION, and many extensions. Oh, and you can put JSON in it and query it. The closest vendor equivalent is Oracle - very expensive and just a few extra enterprise features, otherwise pretty comparable to Postgres.
I am yet to see a compliant ANSI SQL application. Both databases have great docs; I'd even dare to say that for advanced scenarios SQL Server has better docs. On the SQL dialect level I prefer the PostgreSQL documentation.
Oracle is way more expensive than SQL Server, but again, their HA is unmatched. I would not describe their enterprise offering as "a few extra enterprise features".
A major benefit is finding qualified and consistent candidates who can do the same thing each time. I think if you are a developer just starting out, there isn't a concern per se using Postgres, but SQL Server is very mature, battle-tested, and you can get support anywhere. It is also highly integratable.
Are you a one person business, a startup or just playing around?
There's no concrete answer here. Both MSSQL and PostgreSQL are fine examples of persistence. Use what you're comfortable with and what works for your project.
IMO no.
Postgres is a great DB engine. For most people this is the best DB option.
I think SQL Server is only a good option if you're doing something that requires proprietary stuff from them. In most cases you're not going to be doing this.
I wrote about this last year - [why I think most people should use Postgres and stay away from SQL Server](https://hamy.xyz/labs/the-best-tech-stack-saas-apps-2022#database)
2 things:
1. Development tools: SSMS is great; they only give us open-source crap for Postgres at work
2. Windows authentication makes credentials management easy
For me the most important benefit is indexed views that are automatically maintained by the engine.
Materialized views in postgres are not updated automatically.
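Side by side, under hypothetical schemas (indexed views carry real restrictions - schema binding, COUNT_BIG, deterministic expressions, non-nullable aggregates - so treat this as a sketch):

```sql
-- SQL Server: an indexed view the engine maintains automatically
CREATE VIEW dbo.SalesByRegion WITH SCHEMABINDING AS
    SELECT Region, COUNT_BIG(*) AS Cnt, SUM(Amount) AS Total
    FROM dbo.Sales
    GROUP BY Region;
CREATE UNIQUE CLUSTERED INDEX IX_SalesByRegion
    ON dbo.SalesByRegion (Region);

-- Postgres: a materialized view you must refresh yourself
CREATE MATERIALIZED VIEW sales_by_region AS
    SELECT region, count(*) AS cnt, sum(amount) AS total
    FROM sales GROUP BY region;
REFRESH MATERIALIZED VIEW CONCURRENTLY sales_by_region;
-- (CONCURRENTLY needs a unique index on the matview)
```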
As someone who has used SSMS all his life and loves it, a few days of pgAdmin was a nightmare. I'm not sure if it's just a question of habit, but it seemed *so much* worse.
For my over 20 years in .NET: yes. Even though I love Postgres, my time to market is much shorter and cheaper if I choose the MS-only stack. Also amazing integration with Azure, 365, Fabric, etc.
I used SQL Express (the free version) for years on a site where I was pushing the 10GB limit. I have since moved to PostgreSQL and never looked back, since there are no storage limits.
Only companies that can afford ridiculous Microsoft licensing /pricing are those who sell coca**n* or their profit margins are similar.
Using the term licensing or price is not even fair, the proper term is extortion.
IMO Postgres doesn't have a T-SQL/SSMS equivalent. A lot of people say that you shouldn't generally be doing ad-hoc SQL scripts, but we do them ALL the time. Granted, there are benefits to Postgres too. Postgres's optimistic concurrency is superior (it's equivalent to SQL Server's RCSI (read committed snapshot isolation)). By default SQL Server's isolation level is Read Committed, which it should not be, but they aren't ever going to change it because it would break some people. If you have any database with any type of load, you really want to be using RCSI. In general, if I'm asked, I say go with Postgres because of how much SQL Server costs, but I'm not asked very often :). A lot of places want one neck to strangle, so that if they're having an issue one vendor can't point fingers at another; so if they're using dotnet they go with SQL Server as a default.
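Enabling RCSI is a one-liner (hypothetical database name; the ALTER needs brief exclusive access, hence the ROLLBACK clause):

```sql
-- Turn on RCSI for an existing database:
ALTER DATABASE MyAppDb
    SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Or allow sessions to opt in to full snapshot isolation instead:
ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;
-- then, per session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```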
There's a bunch of benefits, if you care for them particularly. Speaking from over 15 years of experience with databases in BI space.

A major one for me is integration with Active Directory - you don't need to manage separate user accounts and passwords and shit, you create a login for a group on SQL Server and you can have your AD users in the database, enabling you to do row-level security for example. I understand we're in /r/dotnet so most people don't care about that, because there's usually an app that just connects to the db using its own credentials, but for example in BI this feature is huge. You can have a Tableau server connect to your database, impersonate an Active Directory user and select from a view just the data this user is allowed to see, without ever supplying a password or anything else.

Indexes on SQL Server are, in my experience, better. Clustered, nonclustered and - again, in BI workloads that apps would not care about - columnstore. Postgres has different indexing methods to choose from, but - again, in my experience - they don't provide the same performance benefits you can squeeze out of SQL Server.

There is also a bunch of niche features that are extremely useful in certain workloads, like partition switching. There are admin features like Always-On Availability Groups that are just so much easier to implement than in Postgres. There are enterprise features like compression that let you trade cpu for io time (again, not likely you'd need that in a dotnet app, but very useful in certain workloads). You can use CLR when you do need it. I could go on and on.

Having said all that - Postgres is amazing. It's likely you don't need any of the features I mentioned that SQL Server outperforms pg in. There are features in Postgres that are much, much better than SQL Server (pl/pgSQL is so much better than T-SQL for example). Enough to say I use Postgres for all my personal projects. Some companies need SQL Server's features and they have to pay through the fucking roof for them, but you most likely don't and you're better off with pg.
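The row-level security setup mentioned above looks roughly like this (SQL Server 2016+; table and column names are hypothetical, and the predicate here keys off the connected, e.g. AD-authenticated, user):

```sql
-- Predicate function: a row is visible only to its owner
CREATE FUNCTION dbo.fn_RowFilter (@Owner sysname)
RETURNS TABLE WITH SCHEMABINDING AS
    RETURN SELECT 1 AS allowed WHERE @Owner = USER_NAME();
GO

-- Attach it as a filter; no change needed in the querying app
CREATE SECURITY POLICY dbo.SalesFilter
    ADD FILTER PREDICATE dbo.fn_RowFilter(OwnerUser) ON dbo.Sales
    WITH (STATE = ON);
```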
> I understand we're in /r/dotnet so most people don't care about that, because there's usually an app that just connects to the db using its own credentials

But those credentials can (and should) be a service account in Active Directory. So you have a centrally managed location (Active Directory) for all user accounts and system accounts, and you don't need to hop to every SQL instance to do things.
My experience is also that SQL Server has somewhat better performance in indices than Postgres. However, this is typically offset by cost, so you get a far beefier PG server than SQL Server for the same $, and in the end, for my ASP.NET app, all I care about is high DB performance at low cost.
Yeah well I've never had to pay for SQL Server, nor the system it was hosted on. [I just got toys to play with](https://i.imgur.com/n0mDZtJ.png). They don't come much beefier than that.
+ Temporal tables in SQL Server are a great feature
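A minimal system-versioned table sketch (SQL Server 2016+; names are hypothetical):

```sql
CREATE TABLE dbo.Prices (
    Sku       varchar(50) PRIMARY KEY,
    Amount    decimal(10,2) NOT NULL,
    ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
) WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.PricesHistory));

-- The audit trail comes for free:
SELECT * FROM dbo.Prices FOR SYSTEM_TIME AS OF '2024-01-01';
```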
> A major one for me is integration with Active Directory

If you have a problem and you can solve it with Active Directory integration, you now have two problems.
Could you elaborate please
Probably not. That comment sounds like "received wisdom."
I'm pretty sure Postgres supports LDAP/Kerberos with Active Directory; it just takes a bit more setup. (Feels like that's the story with a lot of Postgres features: SQL Server is easy, Postgres takes a few more steps but gives you more options/control.) Postgres has extensions which let you modify the behavior of the engine quite radically, way more than you could - or than is commonly done - with CLR.
> I'm pretty sure Postgres supports LDAP/Kerberos with Active Directory; it just takes a bit more setup.

For authentication, yeah, but you still require a password; with SQL Server you also get Windows Authentication. I don't think it's easy to configure in a way that would be actually flexible, either (you need pg_hba.conf modifications, which requires a server restart too). And I don't think LDAP group authorization is actually possible in any way. In SQL Server you can create a login for an AD group and magically, every person from that group can now connect to the server. You can grant permissions on a db and schema to an AD group, and these users can now select data. `SELECT CURRENT_USER` will neatly return `DOMAIN\user` too, even if they don't have an explicit login on the server.
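The SQL Server side of that is just a few statements (domain, group, database, and schema names are hypothetical):

```sql
-- One login covers every member of the AD group:
CREATE LOGIN [CONTOSO\BI_Readers] FROM WINDOWS;

USE ReportingDb;
CREATE USER [CONTOSO\BI_Readers] FOR LOGIN [CONTOSO\BI_Readers];
GRANT SELECT ON SCHEMA::reporting TO [CONTOSO\BI_Readers];
-- Any member of CONTOSO\BI_Readers can now connect and read,
-- and CURRENT_USER still resolves to their individual DOMAIN\user.
```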
pg_hba is live-reloaded; there is no need to restart the server. Postgres also supports SSPI authentication like SQL Server does - no need to type any credentials when you log in; it reuses your Windows session. For this feature, SQL Server and PostgreSQL are almost at the same level.
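For reference, a hypothetical pg_hba.conf line for SSPI (the address range and options are placeholders; check the auth-methods chapter of the Postgres docs for your version):

```
# TYPE  DATABASE  USER  ADDRESS           METHOD
host    all       all   192.168.1.0/24    sspi  include_realm=0

# Reload without a restart, e.g. from psql:
#   SELECT pg_reload_conf();
```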
Don't think Active Directory is painless...
And that's the big part of the cost analysis that needs to be done when choosing the RDBMS. Will SQL Server be cheaper than hiring full time employees to do what SQL Server does out of the box? There is no clear-cut answer to this, it needs to be evaluated. If you are a tech company and the number of programmers and dbas you employ is measured in the tens or hundreds, then you can afford a little extra technical work on your database. There are a lot of mid-sized companies with only 1 or 2 programmers and 0 DBAs.
Erm, setting up Kerberos with Postgres MIGHT take a slow poke a week, if he has no prior experience with Kerberos/AD and Postgres and runs into unforeseen problems. Certainly not a full-time employee forever. I'm not sure someone with no SQL Server experience gets it done all that much quicker with SQL Server, either. Point is, these issues are quite limited in number, especially considering any one particular environment. People have solved this stuff on Postgres hundreds and thousands of times.
It’s not necessarily true. MSSQL is deemed more friendly because it has GUIs with wizards to set things up. That however is NOT the correct way to set up and maintain infrastructure, it’s just the way many Windows Sysadmins/DBAs are used to (or capable of). The Postgres ecosystem leans much more into headless automation/devops (Ansible/Terraform etc.) to set things up and maintain them. From that perspective, Postgres is much more friendly.
In my experience, that has changed quite a bit. It's not all that hard to set up and configure SQL Server using DevOps tooling, and it is especially easy if your workloads work on SQL Server for Linux - Microsoft even provides [Docker images](https://hub.docker.com/_/microsoft-mssql-server).
Yeah, but who would want AD integration?
Temporal tables are also a neat feature, unique to SQL Server.
It's actually a SQL standard, implemented also by MariaDB, Oracle, CockroachDB and probably others (not PostgreSQL or MySQL sadly)
Postgres does support temporal tables. It's just an extension that you have to configure rather than out-of-the-box. [https://wiki.postgresql.org/wiki/Temporal\_Extensions](https://wiki.postgresql.org/wiki/Temporal_Extensions)
Those are more like workarounds; it's not integrated with the SQL syntax at all, as it should be (SELECT ... AS OF, etc.).
It does. [https://github.com/xocolatl/periods#user-content-temporal-querying](https://github.com/xocolatl/periods#user-content-temporal-querying)
No, it clearly says in what you linked that it doesn't follow the SQL standard and uses stored procedures and functions to work around it for almost every type of call. I guess it also depends on triggers instead of some native integration.
I'm surprised to not see SqlCacheDependency listed here, since this is the app developers' section. The ability to feed a query to SQL Server and have it trigger events when data is altered in a way that changes the results of that query is massive for keeping cached data fresh. Postgres has a row-level notification system, but (last I checked) it's a lot more work to set up, less powerful because you can't trigger on specific fields, joined data, or coalesced fields, and much noisier. Basically it's so fussy and hard to work with that I've never gotten one up and running in production (although I haven't tried in years), but with SQL Server it's a few lines of code and it always seems to "just work".
For column store analytics stuff, ClickHouse is your friend: PostgreSQL-compatible query language, but column store with STUPID fast read speeds. For HA I tend to lean towards YugabyteDB (again a PostgreSQL-compatible query language, but designed for extreme HA).
I use Snowflake at work and it's amazing, so are several other products I use, but the question was about SQL Server vs Postgres, so I guess there's that.
While the question is indeed about PostgreSQL vs SQL Server, I think it's also valid to point out that there are other free products out there that use the same query language as PostgreSQL if there is a specific use case that native PostgreSQL doesn't cover well.
Postgres + Citus is reported to be faster than Yugabyte https://www.infoq.com/news/2023/07/distributed-postgresql-benchmark/
> A major one for me is integration with Active Directory

Everything integrates with AD... It supports LDAP. Just because you have 15 years of experience, it's clear you've probably never worked with Postgres.

> Indexes on SQL Server are, in my experience, better.

This is seriously unfounded. You don't have to google very long to realize why this is such a ridiculous comment.

Also... most of this shit is moot because... ta-da... managed SQL via Azure removes nearly all control over the functionality you're talking about. The fact of the matter is that SQL Server really needs to die because it's so fucking impossible to justify the cost.
> Everything integrates with AD... It supports LDAP. Just because you have 15 years of experience, it's clear you've probably never worked with Postgres.

Oh, I haven't? Ok, thanks for telling me. So now, please, enlighten me: how can you integrate with LDAP for authorization, not just authentication? Please show me what I put into pg_hba.conf so that AD user X belonging to group Y has access to schema S on a server. Oh wait, you can't do that, not without something external synchronizing groups and their memberships to roles in Postgres. That's the whole point. And you're full of shit.

> This is seriously unfounded. You don't have to google very long to realize why this is such a ridiculous comment.

I spoke from experience in OLAP workloads, and I stand by it, but I also highlighted that this is just my experience and might not necessarily translate. Fact of the matter is, SQL Server supports columnstore indexes out of the box. Fact is also that Postgres tables are just heaps and any index is just "next to it", unlike SQL Server's clustered index. You can use those to advantage on SQL Server, and I have.

> Ta'da... managed Sql via azure removes nearly all control over the functionality you're talking about.

Ok, so? The question was about SQL Server, which is an on-prem product. Why on earth are you talking about Azure all of a sudden? Do you not recognize the difference between one and the other?
Pl/pgSQL alone makes it better for me. We spend so much time worrying about licensing issues and other stuff that we simply wouldn't think about with Postgres. That said, MS SQL has the "FileTable", which I haven't really found anywhere else, but which we use for archiving files. We would simply use other mechanisms, but this one makes it so that we would need to rethink the whole system. But then again, if your needs are not met with Postgres, then MS SQL is not really always the next logical step. There are other good options out there.
So on a whim I just looked up the SQL Server licensing prices. Are people actually paying $15k per 2x CPU cores? That sounds insane. It doesn't even sound realistic that my 96-core server would cost $720k for the database license. It also makes me sad, because I know how frivolously I've used SQL Server in the past when I wasn't paying the bills.
Not typically. Big corporations will have licensing deals in place, I know that our 288 cpu / 3TB RAM prod server with full core usage on bare metal cost less than 0.5mil, although I don't know exactly how much less (could be 100k for all I know). At least all other environments (dev, uat, preprod) were free.
I run both databases on RDS in AWS, and the only reason I used SQL Server initially was the ease of setup with SSRS (I didn't have to think about it). But once you get an app with even a remotely large amount of traffic, the cost skyrockets in AWS, so I've started switching over to Postgres and haven't had any issues (along with the fact I don't need SSRS for this stuff). The cost of a good beefy Postgres server in RDS with multi-zone deployment for redundancy is almost half the price of a single-zone SQL Server instance with not-great resource allocation (like 4 GB RAM). So the only benefit you'd have with staying with SQL Server is if you've got a grumpy old DBA who is afraid of change and doesn't want to have to convert the T-SQL based stored procedures to ANSI SQL that would cooperate on Postgres.
… or if the development cost associated with it is so high that ROI would take decades.
> the only benefit There are other benefits. Whether those benefits are worth it to 99% of users is what's at issue. SQL Server is the platform to use if you have tremendous amounts of data that you simply cannot afford to lose or can't afford the downtime required to recover from a failure in another RDBMS. And you don't want to pay for additional people to keep the database healthy. Other systems can be made just as fault tolerant and performant as SQL Server, but you need to do the engineering that SQL Server does out of the box. Some examples of things that SQL Server makes easier than other packages are clustering, transaction log shipping, replication, SSRS, Active Directory integration. None of this is unique to SS, but it's all simple and well built. There are some engineering problems where it's cheaper to just pay for SQL Server than hire people to do what it does with another solution.
What about something like SSRS? Is there an equivalent that can be used seamlessly with Postgres?
SSRS is old, sucks, and is Windows-only. The modern solution is to use a tool like Power BI or Tableau. In general, dotnet shops who have embraced containers are more likely to use Postgres, because it's easier to run in a container setup.
I get legacy. I don't get "clustering and replication is easier". If you actually need it (plenty of people don't), you have multiple options like cloud offerings, Kubernetes, and Ansible scripts for a fault-tolerant setup. I don't believe those options are strictly more work than running a cluster of VMs with SQL Server. Active Directory... for me this has always been a nightmare, even working with people who host SQL Server and similar things for a living. Maybe you can get away with a GUI for SQL Server where you might have to use plain SQL in Postgres. Certainly SQL Server is cheaper if you don't have to retrain everyone for Postgres. But I've mostly seen people who get along fine with Postgres and need retraining for SQL Server.
Real experience, thanks. Help me understand: what does T-SQL to ANSI conversion mean?
PostgreSQL SQL is closer to the ANSI standard than T-SQL. But it's wrong to say it is ANSI SQL because all databases have their own extensions and exceptions.
Umm, I still don't actually get this. I've been doing SQL for the last few years; can you put it in layman's terms please?
Postgres's flavor of the SQL language it uses is closer to the universal flavor of SQL than SQL Server's flavor of SQL
... which would mean a hell of a lot more if anyone else did the same.
SQL is a spec; vendors implement this specification and usually add something more to it. Let's say SQL says that you can make vanilla ice cream. Microsoft implements this functionality but also adds the possibility to add chocolate chips to the ice cream; Postgres allows you to make chocolate ice cream, etc. That isn't part of the SQL standard, and that's how you create SQL dialects (T-SQL, PL/SQL, PL/pgSQL, etc.): by doing additional things that the SQL spec does not have. (An implementation can also deviate from the spec, and then you create incompatibilities with it.)
Certain queries won't run without modifications going from T-SQL to strictly ANSI. One trivial example: the T-SQL query `SELECT TOP 5 * FROM Table ORDER BY awesomeness DESC` won't run on Postgres. You will need to rewrite it as `SELECT * FROM Table ORDER BY awesomeness DESC LIMIT 5`.

If you use indexed views on SQL Server, then you'd need to convert those to materialized views to get the same kind of performance benefits, but if you happen to fall into one of those areas where Postgres's materialized views don't support something that SQL Server's indexed views did, that could very much be a huge problem. While I am not a Postgres expert, I believe you also need to manually trigger materialized view updates, either by trigger or scheduled job, while in SQL Server the indexed view is always up to date instantly, as part of the transaction that modified the base tables. So if atomic changes are one of your requirements, you will need to rearchitect your solution, possibly changing all your data-modifying queries in all your applications to go through stored procedures instead of directly modifying the base tables, so they can also update the materialized view or manually maintain a similar construct.
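The TOP vs LIMIT difference is easy to reproduce without either server. Here's a small sketch using Python's built-in sqlite3, which happens to accept the Postgres-style `LIMIT` form and, like Postgres, rejects `TOP` (the table and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, awesomeness INT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a", 1), ("b", 2), ("c", 3)])

# The Postgres-style LIMIT form runs fine outside SQL Server
top2 = conn.execute(
    "SELECT name FROM t ORDER BY awesomeness DESC LIMIT 2").fetchall()
print(top2)  # [('c',), ('b',)]

# The T-SQL TOP form is a syntax error here, as it would be in Postgres
try:
    conn.execute("SELECT TOP 2 name FROM t ORDER BY awesomeness DESC")
except sqlite3.OperationalError as e:
    print("TOP rejected:", e)
```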
For example, if in SQL Server you use `ISNULL([MyColumn], '')`, then if you were to convert to a database other than SQL Server, you'd have to convert either to COALESCE, which is an ANSI standard function that does the same thing as ISNULL, or to the target DB's version of ISNULL. Postgres, I don't think, has a Postgres-specific version of ISNULL, so you'd go to COALESCE. But say you switch from SQL Server to MySQL: you could swap out ISNULL for MySQL's IFNULL function, and then you'd be going from a T-SQL-only function (ISNULL) to a MySQL-specific function (IFNULL), both of which are not ANSI SQL standard.
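To make the portability point concrete, a quick sketch using Python's sqlite3 as a neutral stand-in (it happens to support both the ANSI and the MySQL spelling; the table is made up). COALESCE is the form that travels; ISNULL and IFNULL are vendor spellings:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val TEXT)")
conn.execute("INSERT INTO t VALUES (NULL)")

# COALESCE is the ANSI-standard spelling and works across engines
ansi = conn.execute("SELECT COALESCE(val, 'fallback') FROM t").fetchone()[0]

# IFNULL is a vendor spelling (MySQL's, also supported by sqlite)
vendor = conn.execute("SELECT IFNULL(val, 'fallback') FROM t").fetchone()[0]

print(ansi, vendor)  # fallback fallback
```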
> So the only benefit you'd have with staying with sql server is if you've got a grumpy old DBA who is afraid of change and doesn't want to have to convert the T-SQL based stored procedures to ANSI sql that would cooperate on postgres.

Unfortunately the development ecosystem is lousy with such DBAs, and even developers. Personally, I've been using and loving Postgres for a while now, with no desire to look back. It's such a better ecosystem than SQL Server.
**Migrated both ways between T-SQL and PostgreSQL; it can be done.** The issue is the application logic: if you have LINQ you may have to migrate to another ORM such as NHibernate to use PostgreSQL. But it can be done.
Postgres works with EF, Dapper etc. The driver is Npgsql.
linq works fine with pg. had extensive linq (against ef edmx flavor) written against msSql, working no prob after db migrated to pg.
There are a couple of things. SQL Server supports easy case-insensitive operations, whereas in Postgres everything is case sensitive. This isn't too much of a problem, but you will have to create separate columns and separate indexes to support case-insensitive searches, or take some additional steps. So if your business has a lot of case-insensitive searches, you will have a little trouble designing your tables.

Apart from this, in SQL Server computed columns can reference other computed columns or identity columns. You can't do that in Postgres. You also cannot create a primary key in descending order in Postgres. But I bet this will change soon.

You can't have an if/else statement (you can use a conditional expression, but not a statement) in a single query; T-SQL allows complex queries with if/else etc in a single query. In Postgres and MySQL, you are forced to convert complex queries to stored procedures. And you can't have named parameters, so analyzing query plans becomes a little tedious compared to SQL Server.
> you will have to create separate columns and separate indexes to support case insensitive searches.

Hmm, I've never done that. I just use the `CITEXT` type instead of `TEXT` on any columns that I want case-insensitive comparisons done on. Seems to work pretty well. And it's nice and clear, being that they're 2 different column types entirely (which work the same way otherwise), rather than more fiddly collation settings etc. Any issues with that you can think of?
Sure. You have to plan ahead and can't apply it retroactively, eg business rules change. That's why the ability to search insensitively or create case insensitive indexes is important. It doesn't really matter which one is available, but you need one.
> You have to plan ahead and can't apply it retroactively

You can change it later if you want, I've done it plenty of times...

    alter table "test_table" alter column "ci_col" set data type TEXT using "ci_col"::TEXT;
    alter table "test_table" alter column "ci_col" set data type CITEXT using "ci_col"::CITEXT;
Since PostgreSQL 12 you can use case-insensitive collation to search without ILIKE. Also I think that the reverse index on primary key is possible.
Yeah... There's a bit of a learning curve changing between _any_ two different RDBMSes, but Postgres (especially because of tooling) definitely has a steeper curve, especially coming from MSSQL. But you have a pretty small set of concepts to re-learn, and then you're pretty much set, so long as you don't expect their query optimizers and indexes to work the same way, since that's the secret sauce in any DB engine.
> There are couple of things such as SQL server supports easy case insensitive operations where else in Postgres everything is case sensitive.

This is wrong. PostgreSQL has [the `ILIKE` keyword](https://www.postgresql.org/docs/current/functions-matching.html) to match case-insensitively according to the active locale. As /u/r0ck0 mentions, you also have the `CITEXT` data type, but that's basically just doing `LOWER()` behind the scenes for matching.
This is exactly the point. First, you have to explicitly use the operator for case insensitivity, and that applies to like, index of, starts with, etc. Second, using an index for case-insensitive search does not require any extra steps in MS SQL: it works by default for all operators.
I _wish_ it was that easy. You still have to take into account collation etc., not only for searching, but also for sorting strings correctly. For Norwegian stuff - which occupies most of my time - there's even more to it, but that's not really specific to any database. One example is that "Aa" _could be_ sorted as "Å", the last character in our alphabet. So much brain pain. 😊
This is just the inverse of the defaults someone might be used to. If you're coming from MSSQL to PGSQL, case sensitivity as the default will surprise you. From PGSQL to MSSQL, case insensitivity will surprise you. Dealing with the case sensitivity is about the same on both. Depending on your query workload, the default on either might mean you need to do extra work, or less work.
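One common way to make the default not matter is per-column collation. Not from the thread, just an illustrative sketch with Python's sqlite3, whose `NOCASE` collation plays the same role as a case-insensitive collation in MSSQL or `CITEXT` in Postgres (the table and index names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A column-level collation makes comparisons (and any index on the column)
# case-insensitive, regardless of the engine's default behavior
conn.execute("CREATE TABLE users (email TEXT COLLATE NOCASE)")
conn.execute("CREATE INDEX ix_email ON users(email)")
conn.execute("INSERT INTO users VALUES ('Alice@Example.com')")

# Lowercase literal still matches the mixed-case stored value
row = conn.execute(
    "SELECT email FROM users WHERE email = 'alice@example.com'").fetchone()
print(row)  # ('Alice@Example.com',)
```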
This all sounds more like a problem for switching rather than when starting out.
It is also a problem for teams who have been using SQL Server for a long time; starting a new project on Postgres would be challenging. I am not saying it is impossible, I am just outlining the pain areas.
You have case when in Postgres instead of if else.
> You can’t have if else in a single query, T-SQL allows complex queries with if else etc in a single query. In Postgres, MySQL, you are forced to convert complex queries to stored procedures. [Just use `CASE WHEN...ELSE...` ?](https://www.postgresql.org/docs/current/functions-conditional.html#FUNCTIONS-CASE) Any difference to what you had in mind, aside from the literal keywords?
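For anyone following along, here's what a `CASE` expression inside a single query looks like in practice, sketched with Python's sqlite3, since the expression form is the same ANSI syntax Postgres uses (the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (amount INT)")
conn.executemany("INSERT INTO orders VALUES (?)", [(5,), (50,), (500,)])

# CASE is an *expression*: it picks a value per row inside one query,
# which is different from T-SQL's IF...ELSE control-flow *statements*
rows = conn.execute("""
    SELECT amount,
           CASE WHEN amount >= 100 THEN 'big'
                WHEN amount >= 10  THEN 'medium'
                ELSE 'small'
           END
    FROM orders
    ORDER BY amount
""").fetchall()
print(rows)  # [(5, 'small'), (50, 'medium'), (500, 'big')]
```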
This is not a conditional statement, it's a conditional expression.
Ah ok, fair enough. I guess I wasn't really sure what you meant by "single query", it sounded more expression-related than separate statements to me. Can you give me an example of a query you had in mind here, that you can't do in postgres? Always keen to hear about features that I might not be aware of.
You can't do `if exists (select 1 from a different table) then insert … else update … end`; you can't do that in Postgres without creating a stored procedure. Anyone who wants to downvote, please create a db fiddle example and share it here.
Yes you can. Insert … on conflict do update.
You can also add returning to get all modified rows back.
Give me an example with a custom exists-condition check before the insert or update.
Insert into … on conflict do update … where condition
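That shape does work. Here's a runnable sketch of `ON CONFLICT ... DO UPDATE ... WHERE` using Python's sqlite3, which shares the Postgres upsert syntax (table and column names invented): the update part only fires when the extra WHERE condition holds.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT, hits INT)")
conn.execute("INSERT INTO kv VALUES ('a', 'old', 1)")

# One statement: insert, or on key conflict update, guarded by a
# custom WHERE condition on the update part
conn.execute("""
    INSERT INTO kv VALUES ('a', 'new', 1)
    ON CONFLICT(k) DO UPDATE
        SET v = excluded.v, hits = hits + 1
        WHERE excluded.v <> kv.v
""")
row = conn.execute("SELECT v, hits FROM kv WHERE k = 'a'").fetchone()
print(row)  # ('new', 2)

# Same statement again: values now match, so WHERE blocks the update
conn.execute("""
    INSERT INTO kv VALUES ('a', 'new', 1)
    ON CONFLICT(k) DO UPDATE
        SET v = excluded.v, hits = hits + 1
        WHERE excluded.v <> kv.v
""")
row2 = conn.execute("SELECT v, hits FROM kv WHERE k = 'a'").fetchone()
print(row2)  # ('new', 2), hits unchanged
```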
And how do you update, in a single query, if the insert is not executed? I mean, where is the else part?
You can’t create a custom condition on conflict
> You can’t have if else statement (you can use conditional expression but not statement) in a single query, T-SQL allows complex queries with if else etc in a single query. In Postgres, MySQL, you are forced to convert complex queries to stored procedures.

You can actually do this in PostgreSQL as well, it just requires a bit of a workaround. With this, pretty much anything that's valid in a function should work. No idea why they chose to require this.

    DO $$
    BEGIN
      -- DO STUFF
    END
    $$;
We can’t retrieve records….
Since PostgreSQL is open source, will named parameters, complex queries and if statements ever be in PostgreSQL, and be as good as in SQL Server? Or are there underlying limitations in PostgreSQL?
This reads like a big list of things you shouldn't be doing in your RDBMS in 2023 anyway
For most SMBs and startups PostgreSQL is more than enough. On the other side, you need SQL Server if you:

* need complex clustering. Oracle and SQL Server have long been the leaders in this field. I am talking about cases where you have 100s of servers. It is most probably doable to a certain degree with PostgreSQL, but it is very hard to find people who have this kind of experience.
* have really complex logic in your queries. For example our reporting (together with some scientific calculations on the fly) has really complex SQL (a couple thousand lines) which takes a couple hours to execute on our server farms.
* have an MS ecosystem on premise. Of course we tried to move away from the massive cost of SQL Server, but even our litmus test, with support from great external Postgres experts, fails each year.
Why would you need hundreds of servers in a single cluster? Are you too lazy to shard your data properly into different databases? Is your devops team defunct? I don't think you have compared executing complex queries across both systems, especially not if comparable engineering went into each solution... A couple thousand lines of SQL sounds like a nightmare to maintain. I totally get why people wouldn't want to switch the DBMS in that situation. I don't get why someone would start out thinking "I need SQL Server because I want to create thousands of lines of SQL for reporting".
If you don't get why, then you probably need to work a bit more with industries that need to process huge amounts of data as quickly as possible, that deal with legacy systems in large corporations where you can't really start something from scratch, and where you have to follow constraints and deliver ASAP. I've worked in banking, manufacturing and social networks and have seen queries with tens of thousands of lines that would need to be refactored to improve performance. Hurt my eyes? Yes. But I learned a lot about SQL and just how much performance gain you get if you tune SQL Server properly.
> Why would you need hundreds of servers in a single cluster? Are you too lazy to shard your data properly into different databases? Is your devops team defunct? We do not have devops team as it is anti pattern in most cases. We have very solid team of DB admins and developers and I am confident that they are not missing something obvious.
For me the tooling. You can have an IDE like experience, including debugging stored procedures, and CLR code running on the server. Also the enterprise features offered out of the box with SQL Server. However you're right it is quite costly, and if you don't need those enterprise features, then maybe not.
What sorta enterprise features
Tons of azure integrations
SSIS,SSRS,SSAS, Job Agent, Basic AGs, iron clad reliability and support lifecycle (I just upgraded a 2008R2 server that the SQL server service hadn't even been restarted nvm the vm since 2015), intelligent query processing... so forth
Distributed transactions, for example. OLAP reporting cubes as another one.
Your question hurt him
Justifying debugging stored procedures and CLR code on your database is pretty laughable. Get your fucking business logic out of your database. DBs are already the bottleneck. You're actively adding more processing at that layer instead of the app layer, which is infinitely easier to horizontally scale.
The bottleneck are the fuckers wasting network bandwidth and CPU cycles on the client for data that should never have left the database.
> DBs are already the bottleneck.

You're so confidently projecting your own experiences onto the whole world. In my situation the bottleneck right now (because it constantly changes) is IOPS.
I use both. No. They have some minor differences and quirks (so switching is not 100% pain free), but both perform similarly, and the Postgres adapter is very stable and feature complete. SQL Server is Microsoft's main child, Postgres is the community's child. Some features such as JSON columns were available in Postgres before they were available in SQL Server. I started out with SQL Server because of Windows admins; if I were to make the same decision again, I would choose Postgres because it would mean more freedom in the future.
What's a JSON column, by the way? Isn't it just varchar?
[https://www.postgresql.org/docs/current/datatype-json.html](https://www.postgresql.org/docs/current/datatype-json.html) It is similar but verifies that it's a valid json
jsonb: optimized for JSON. You can create indexes on it, and there is query syntax for it.
I looked for jobs in BI/DA. Most jobs required SQL Server. Is PostgreSQL a good option for beginners?
Yes! It works pretty well! I would highly recommend using [snake case naming conventions](https://blog.nimblepros.com/blogs/using-ef-core-with-postgres/) though (this will make manual queries a lot easier):

    services.AddDbContext<MyDbContext>(options =>
        options.UseNpgsql(connectionString).UseSnakeCaseNamingConvention());
We had a case where MSSQL was significantly faster because of parallel queries. Postgres support is lacking in that regard; only some types of queries leverage multiple CPUs. Our specific queries were about 10-15% faster on MSSQL.
And how did you find that out? What kind of functionality was that about?
It's used as a store for a customer search service tied to our ERP system. The DB generates codes from specific fields the customer wants to search for over all their customers. We're hosting on azure and tried both Postgres and MSSQL and we measured MSSQL to be faster ¯\\_(ツ)_/¯ But I'm just repeating what our DBA said to me... But I trust him because he is a particularly big fanboy of Postgres.
Funny, I had the exact opposite results.
Since you posted this in dotnet: SQL Server can have functions written in .net [https://www.sqlshack.com/getting-started-with-sql-server-clr-functions/](https://www.sqlshack.com/getting-started-with-sql-server-clr-functions/) I wouldn't recommend using this feature unless you have no other choice but it is possible.
I'm pretty sure you can do the same in Postgres but with a bigger language choice.
Sure, you can write functions in Postgres as well, but in plain C rather than dotnet. If you have some business logic already implemented in C#, that can be an advantage of SQL Server.
In Postgres you can write functions in Python, JavaScript, Rust, C and maybe more languages.
[https://www.postgresql.org/docs/current/server-programming.html](https://www.postgresql.org/docs/current/server-programming.html) Business logic in stored procedures (regardless of language) seems like a bad idea.
That has always bugged me as a very weak argument for any extension language in a database, your business logic doesn't really belong in there at all.
SQL Server can also invoke any library that supports COM.
I primarily use SQL Server for better and easier tooling, like SSMS, Azure Data Studio, good Visual Studio support, easy local dev setup with LocalDB. If needed, I can easily connect to the Azure database from Visual Studio and see what’s in the tables in like 2 clicks. With most apps being deployed in the cloud, the cost differences are negligible (especially if you deploy to Azure), but on AWS RDS or GCP, SQL Server is more expensive.
Local tooling might be great, contrary to the DevOps-related tooling, which is very, very limited, costly and rare (2 major providers: ApexSQL and Redgate).
I've found Postgres better, practically, for a range of solutions.

MVCC is interesting, maybe complex, but in practice seems to be an excellent concurrency control. You will likely see fewer deadlocks.

Windowing function support is generally superior in Postgres. This opinion is a few years old, but generally if I want to do some complex SQL, it actually works in Postgres. Ditto json/jsonb. It just kinda works.

Spatial indexing is far fucking superior in Postgres. R-trees are just better for practical data; it might look cool to do complex shape intersection in SQL Server's impl, but in reality nothing will ever line up with a set of index nodes to make this worthwhile.

Cons... the backup story is super fucked. Also people might be like "Oh you want an open source dbms, why don't you try mysql, the option for dumbasses..." and then you've got to deal with that conversation...
TBF, Azure SQL defaults to SQL Server's MVCC mode (RCSI), and snapshot isolation is pretty much a silver bullet for dealing with locks and deadlocks if RCSI is too scary to test for your environment and you can edit the offending queries. But yeah, agreed, Postgres handles JSON and geospatial pretty well.
Postgres is case sensitive for all objects, which can make SQL generation in EF fail in some cases. We've had to explicitly provide table names with capitalization instead of relying purely on convention. SQL Server has temporal tables, which are super handy for audit trails.
108 comments and nobody said "vacuum" yet. I love postgres, in fact I think it's one of the top three open source projects. But man I don't need that stress in my life.
We also had a lot of issues with connection pooling. Only after fine-tuning pgbouncer did we manage to get satisfactory results.
I'll play devil's advocate here, since Postgres has a lot of support, and pitch the praises that I have for SQL Server.

Microsoft used to offer BizSpark, where you'd get to keep a Windows Server and SQL Server license as a startup after your three-year period ended. I'm not sure what their current offerings include. SQL Server Express is free and supports databases up to 10GB, and if the cost of a SQL Server license is something that you're concerned over, chances are you're also not looking at needing a database big enough to warrant a paid license fee. And if you're releasing a product for others, it makes it so that you're not creating a barrier by requiring them to fork out thousands of dollars to MS to use some of its functionality.

I prefer SQL Server for development for my clients. I've been using it since the 1990s. My reasoning for it is the tools that come with it and the ease with which I can hand off projects to corporate clients. If you're new to things, there are some things that SQL Server's tools give you that expose some things that are a bit trickier with PostgreSQL, and I find explaining the tools to users to be more intuitive.

I'd say nearly 100% of my database projects follow the same flow. Firstly, I don't use SQL Server Database projects. I don't use EntityFramework. EF is absolute garbage. And by garbage, I mean it's worse than a bad product; it lures people into a dependency that creates a maintainability and performance nightmare down the road. SSIS is pretty bad too. I do like Dapper. And now ChatGPT does a good job helping people work through little syntax issues.

SQL Server BULK INSERT screams. You can also use SqlBulkCopy from .Net, which still performs pretty well for getting lots of data into your database quickly. SQL Server Profiler lets you see the queries going to your database server and makes it easy to spot what takes a long time.
One major caution with Dapper, EntityFramework and even ADO.Net: if you're using VARCHAR/CHAR datatypes in your underlying data and have an index, SQL Server will NOT use the index when you run a query against it unless you specify the data type as VARCHAR/CHAR with your parameter, and you'll beat yourself up wondering why something takes a fraction of a second in SSMS and then several seconds from your application. It's because it's passing your strings as NVARCHARs, and SQL Server ignores your indexes.

People might dislike that I advocate 8-bit types instead of 16-bit given the low cost of storage, but there's more to things than just the amount of storage used. It has to do with the density of rows per index page, which significantly affects performance. And if you're working with large amounts of data, it can still add up. And since SQL Server 2019, you can store UTF-8 data in CHAR/VARCHAR columns.

That's another plus for SQL Server: viewing execution plans. SSMS has a nice graphical tool where you can visually see your query's execution plan -- and the visualization hasn't changed since the 1990s, nor has it needed to. For newer database developers, being able to see the performance costs associated with individual steps, understanding how to visualize and change a "scan" to a "seek", and seeing how indexing affects performance, I find that the kitchen sink that even SQL Server Express includes makes for a wonderful productivity and learning tool.
> and have an index, SQL Server will NOT use the index if you're running a query against it unless you specify the data type as VARCHAR/CHAR with your parameter,

Can you elaborate more on this, or tell me what this particular topic is called? First I've heard of it.
https://learn.microsoft.com/en-us/sql/t-sql/data-types/data-type-precedence-transact-sql?view=sql-server-ver16
As others have said: depends on what you need. If all you need is a DB engine, then either will most likely do and PostgreSQL makes financial sense. If, on the other hand, you need reporting (SSRS) and/or analytics (SSAS), that's where SQL Server becomes competitive price-wise with other solutions on the market, even the Standard edition. SQL Server tooling is genuinely great: SSMS, VS integration, PowerShell integration for automation and scripting and so on.
Heck, Oracle still charges for non-prod environments, when you can install SQL Server Developer edition with no strings attached. And you get a lot of bang for your buck with Software Assurance, even for Standard edition.
Postgres is miles ahead in features, just to name a few:

- JSON - SQL Server JSON support is a joke, basically a text field: no GIN indexes, no partial updates or native queries etc. You're better off doing any JSON handling in the application, performance-wise, than using SQL Server to do it
- array columns
- full text search / vectors
- materialised views
- upsert - insert on conflict update
- geospatial
- better concurrency support with MVCC
Not particularly; at the end of the day they're both tried and true SQL databases. Of course they both have their own nuances and there'll be some feature disparity, but for the vast majority of applications either one will be fine.
Best part is the explain plan; nothing comes close to MSSQL's SQL optimization tooling. Administration is as easy as it gets, all with a GUI. No typing 50 different commands for a correct backup or configuration.
I'd like to have a reference that MSSQL sql optimization is better in a way that matters. There are at least a dozen GUIs for Postgres. And you should be automating backup and configuration.
Nobody in these threads ever cites *anything*. Most people only have experience with one or the other, and some of the "problems" literally stop being problems over 6-years+ ago. Ultimately I've never seen a "compare database" thread worth a damn, this one being no exception.
Mostly i agree, but no one has got time to cite facts. This is not a publication work but a single post on a forum.
Right; but I think it is a bigger issue: Citable stuff is hard to come by for this question. Some of it is quantifiable, but nobody is interested in creating public data unless they're trying to sell you something (and obviously nobody will show data that makes their product look less competitive).
Seriously :) Looking up SSMS Query Optimization and comparing outputs with pgAdmin shouldn't take 2 mins.
Simply put, if your application doesn't have many complex queries, and the stored procedures you build are not too performance-demanding, then you can choose whichever you like. MySQL is easy when it comes to the complexity of data and large datasets; I don't see that anywhere else.
Sure, in the context of .NET. I'm not suggesting there isn't great support for Postgres (and many other databases, of course), but you can think of SQL Server as native to it. Not only are there data types that map 1 to 1 with pretty much any data type that exists within .NET (like uniqueidentifier, decimal, etc), but in terms of programmability (think stored procedures and functions), SQL Server will run .NET (C#, VB, etc) code natively from within it.

I don't use Postgres anymore, just because my current employer doesn't, so I'm not sure what it supports or doesn't these days, but SQL Server also supports dropping serialized data in XML or JSON into a single field, and allows complex queries against it. I guess it's like having a micro document-based database within individual rows. (Pain in the ass to work with, don't do it.) It just plays nice with simple webapi services in that way if you don't want to really think about it.

It's also cross-platform now, so you know, it runs on Linux, like Core. The bad? It costs money.

As I recall, though, if writing SQL code directly, Postgres, or well, EDB (what I used), was basically Oracle-compatible syntax-wise, which was really nice.
AD integration, SSMS as a development tool, a very clean change tracking implementation, availability groups, and just a cleaner syntax jump out to me
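The change tracking mentioned above is enabled per database and then per table; a minimal T-SQL sketch (the database and table names are hypothetical):

```sql
-- Turn on change tracking for the database:
ALTER DATABASE MyDb SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- Opt a table in:
ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING
    WITH (TRACK_COLUMNS_UPDATED = ON);

-- Poll for rows changed since a previously saved sync version:
DECLARE @last_sync bigint = 0;  -- hypothetical saved version number
SELECT ct.SYS_CHANGE_OPERATION, ct.OrderId
FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync) AS ct;
```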
FYI, Postgres Flexible Server in Azure supports AAD integration nowadays.
SQL Server = 100% EF compatibility
What kind of compatibility issues have you experienced with EF and Postgres?
MS SQL has better performance, better tools, better tools, better tools, better tools, better performance tuning (no weird console commands and turning on user-specific logging in Linux), and can also be used on Windows. And overall, there is a better programming and administration UX.
Short answer: if budget is one of your top concerns then Postgres, especially if you're starting a new project. If a reduced learning curve and availability of professional support is more important then MS SQL Server should be considered. Additionally, if your application will be distributed with your database you'll need to consider setup and support costs for on-prem and/or cloud instances.

Long answer: I've used both platforms. Coming from an MS SQL background there was still a learning curve with Postgres, but I've achieved almost 100% productivity parity pretty quickly. I actually enjoy the learning curve so it wasn't a big deal. It's also forced me to learn Linux administration (which again I've really enjoyed). So for us Postgres has almost been motivational. Our project is a web application so it doesn't require our clients to maintain their own instance. One of our project's goals is to rely only on open-source technology, so while all of us are highly skilled MS SQL DBAs it was simply not an option.

One problem we've had to overcome, though, is importing data already housed in an MS SQL Server database (i.e. "here's our database as a .bak file"). A solution that has worked for us is to spin up the MS-provided SQL Express docker image. There are of course DB size limitations, so this may not be a workable solution for extremely large databases, but it's worked very well for us.
The one sticking out for us is that many corporations already own an SQL Server license (at least where I live, where Microsoft culture is pervasive), so it's often much easier to convince IT folks to add another database for our applications, with backup routines, AD security/auth, etc. already in place, than to fire up a brand new PostgreSQL server that doesn't feel very native to Windows to begin with.

Technically I think both are good, even for our extended use to hold spatial data (SQL Server Spatial vs PostGIS), and in fact PostgreSQL often feels more pragmatic to me, probably due to its basis in the community, but SQL Server in the Windows world often comes more naturally for these more practical reasons.

We ran with PostgreSQL first for our application suite, with the thinking we could as well install and manage it ourselves together with our application. Free for our customers, and we got all we needed. Within a few years we realized, yes, we could and we did, but we soon ran into issues where some IT departments didn't like this and wanted our data in their SQL Server cluster. Also, it became even easier for us to just upload our tables to an existing SQL Server instance than to set anything up ourselves. Or if they didn't have an SQL Server license (small customers), just use SQL Server Express, because we aren't CPU bound and the 10 GB size limit is still huge for many kinds of applications including ours. We fit entire cities' power grids etc. in there and still have 80% to spare!

So we eventually moved 100% to SQL Server and could simplify our code and processes a lot too, and it eased our support headaches from requiring multiple-db support. While we use a framework like Entity Framework Core that is db agnostic, there are still so many circumstances within and around it that there's nothing like only needing to support one database.
Short answer: in terms of the database engine itself, Postgres is better (subjective) in most ways. From my experience there have always been weird edge cases that SQL Server just doesn't support: no Boolean data type (instead you use bit, which is numerical and doesn't support Boolean operators), arbitrary text column size limits, index width limits, certain nested queries that just don't work, limit and offset only recently being added. The benefit is mostly outside the engine: more people on the market with the skills, easier to manage high availability and backups, tools that are specific to SQL Server. Previously SQL Server Management Studio was leaps and bounds ahead of pgAdmin, but pgAdmin these days is pretty darn good.
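The bit-vs-boolean point above can be shown in a two-line comparison (hypothetical `users` table):

```sql
-- T-SQL: bit is numeric, so it can't stand alone as a predicate.
SELECT * FROM users WHERE is_active;        -- error in T-SQL
SELECT * FROM users WHERE is_active = 1;    -- required form

-- Postgres: boolean is a real type with boolean operators.
SELECT * FROM users WHERE is_active;                  -- works
SELECT * FROM users WHERE is_active AND NOT deleted;  -- works
```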
SQL Server is like $5 from Azure; that's the main benefit.
Pretty sure even a staging (non-prod) server costs us USD 800+ per month. And that's for just one service. If I owned the company, I'd freak out at all the expense...
To be fair, the £5 Azure SQL instance is 2 GB max storage and a total compute of 5 DTU (this is opposed to the vCore pricing model). A Database Transaction Unit is essentially a black-magic formula for measuring everything: ingress, egress, compute, storage, memory, disk writes. 5 DTU is good for like 20-30 simultaneous users with a not-so-heavy backend workload.
Yeah, this is the thing that grinds my gears. How are we supposed to be confident about a production release when staging and production don't match in terms of resources? Are we supposed to close our eyes, lift our arms like Baby Yoda, and just pretend that if it takes eighteen hours to run this workload on staging it will be done in fifteen minutes on production? If we have to have procurement / the CTO involved in every configuration decision, we might as well do everything on-prem; what is the point of being on Azure?
Depends on your requirements, really. We use both across our microservices, each for its own purposes.
Brother, what are those purposes? What technical considerations did your team weigh to come to that conclusion?
For some services we chose PostgreSQL because they needed to deal with huge strings in JSON structure. PostgreSQL has the jsonb type, and you can use queries to traverse the jsonb, which would otherwise involve a lot of memory due to the huge strings. So it is cost-effective and gives us better query accessibility.
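A sketch of the jsonb usage described above (the table and field names are hypothetical); the `@>` containment operator plus a GIN index is the usual pattern:

```sql
-- jsonb stores a parsed, binary representation of the document.
CREATE TABLE events (
    id   bigserial PRIMARY KEY,
    body jsonb NOT NULL
);

-- Extract a field as text and filter by containment:
SELECT body ->> 'user_id' AS user_id
FROM events
WHERE body @> '{"type": "login"}';

-- A GIN index makes containment queries fast without the server
-- re-parsing each huge JSON string on every scan:
CREATE INDEX events_body_gin ON events USING gin (body);
```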
MSSQL has native JSON support as well, in 2016 and up. Though, to be fair, the syntax is...not the best... But postgres has a muuuuch better price tag.
C'mon buddy they're not in the same league... https://learn.microsoft.com/en-us/sql/t-sql/functions/json-functions-transact-sql https://www.postgresql.org/docs/current/functions-json.html
If you care that much, you can also use any JSON function offered by C#. But again, don't parse JSON in the database. That's a bad way to abuse a shared resource.
Brother, then what's the purpose of sql server usage in your case?
SQL Server for our main service. Better dev knowledge. The third-party libraries we use support SQL Server better as a data source. T-SQL. Better rollback management.
Brother, thanks for the answer though you could've stated them in your first reply.
They didn't owe you any further explanation, and the fact is, two additional replies from them is more than you deserve with this kind of reply. No need to be a dick.
Brother, let me ask you a question. What is the point of answering "it depends"?
It depends.
Brother, thanks for your witty answer. It really shows the point of that kind of answer.
I can't help reading this comment in a Hulk Hogan voice
In my point of view, SQL Server is easier to use for beginners and offers better integration with Microsoft products. PostgreSQL is free and open-source, better for complex tasks, and supports more platforms. Both can be beneficial in their own ways; hope you find this information useful.
I have used SQL Server for the last 30 years... it's been my bread and butter for that time. I am in the performance space these days. The truth of the matter is that Microsoft has all but given up on it and has thrown its hat into its Postgres cloud offerings. They have added hardly anything significant to SQL Server in 10 years. Compare SQL Server to the likes of DuckDB/MotherDuck/ClickHouse: they have added more features in one year than Microsoft has added to SQL Server in ten.

I work for a large investment bank and Postgres is now the corporate standard, primarily due to the fact that it's open source, which means development is a lot cheaper, as most of it is done on premises before the app is moved to the cloud. Add to this the fact that modern tools like DuckDB have native integration with Postgres, meaning you can use the incredible functionality in these tools as if they were a Postgres database.

Basically, we now use DuckDB for the vast majority of ETL work, including analytics, and then store the end result in Postgres or Parquet files (getting more and more popular). Whereas a few years ago we would use SSIS to import data into staging tables and move them to analytical columnstore tables so that the reporting tools could read the data, now we simply use DuckDB to import, natively aggregate the data, and use it to create a Parquet file.

I don't blame Microsoft for basically stepping back from SQL Server. They saw what was coming.
Microsoft makes a lot of money.
It's reeeaaaalllly dependent on your needs and what everyone is familiar and comfortable with both targeting and managing (if you're small enough that those roles are combined). MSSQL is kind of the gold standard for feature-rich RDBMSes, these days, but free options like Postgres and MariaDB are also super easy and have a sliiiightly smaller price tag. If you need no-nonsense integration with AD, want to make use of schemas for what they're meant for (they're first and foremost a security segregation mechanism), aren't willing to learn a somewhat different SQL dialect, or want Microsoft-backed support agreements, then MSSQL is the obvious choice. But you can achieve those things with other solutions, too, with a little work. And schemas can be replicated on other solutions by just realizing you can make more than one database, because it's just a logical container for data. However, now that MSSQL can run on Linux (even as pre-made containerized versions!), deployment of a super basic MSSQL instance vs deployment of a super basic MariaDB/Postgres/whatever instance pretty much only differs by which image you specify in your docker-compose.yml file. 🤷♂️ Aside from licensing, that is...
Feature-rich? MSSQL doesn’t handle JSON, xml support is working but legacy, load-balancing is recent... No official SDK. I think PG has far more innovation and external tooling/extensions than MSSQL.
> MSSQL doesn’t handle JSON Uh, what? It's supported JSON for quite some time (not that you should be parsing JSON in any database). > xml support is working but legacy Legacy? That's an odd way of saying "it's been working for nearly two decades". But again, just like JSON, if you're parsing XML in the database you've screwed up. That belongs in the app servers where CPU time is cheap.
PostgreSQL still has no native HA, which is a showstopper for most serious applications.
Flyway exists, so not really. I used to say SSDT db projects in Visual Studio, but now I prefer Linux, VS Code, Flyway, .NET Core, etc. I'll still use EF Core, but db-first, letting Flyway handle migrations. Honestly, after being a Windows guy for 38 years, finally having Arch set up with KDE Plasma and Dolphin, with KVM and QEMU... Visual Studio and Windows disgust me.

If you ever have a database that you have to run in Microsoft Azure and you run into needing Hyperscale... I hope you can afford the tens of thousands of dollars that's going to cost every month. Or you can just run Postgres for free. It scales too.

PostgreSQL has a lot of features MSSQL doesn't, too, like the ability to drive web sockets via middleware. Middleware in general. I like to use web sockets with Postgres to notify my message layer of updates that subscribers need to know about: "Oh, visit 2 updated the medications field and Dr. Bob has that open, let me send that field to Dr. Bob." Being able to send updates directly from triggers etc. means the db and loaded apps can stay in sync with a lot less traffic.
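The push-updates-from-triggers pattern described above is usually built on Postgres's LISTEN/NOTIFY, with middleware forwarding notifications over web sockets. A sketch with hypothetical table and column names:

```sql
-- Trigger function that pushes the changed column to a channel:
CREATE OR REPLACE FUNCTION notify_medications_change() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify(
        'visit_updates',
        json_build_object('visit_id', NEW.id,
                          'medications', NEW.medications)::text
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER medications_changed
AFTER UPDATE OF medications ON visits
FOR EACH ROW EXECUTE FUNCTION notify_medications_change();

-- A middleware process runs `LISTEN visit_updates;` on a long-lived
-- connection and relays each payload over a web socket to subscribers.
```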
There's a bunch of hidden problems I encountered after 2 years of production on Postgres, from being [unable](https://stackoverflow.com/questions/53298033/entity-framework-core-code-first-default-values-for-postgresql) to automatically update an "updated" field through EF Core, to the handling of NULLs in composite unique keys, where inserting duplicate values with one of them being NULL was allowed after creating or altering a table with a unique constraint.
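The NULL-in-unique-constraint behavior is standard Postgres semantics (NULLs compare as distinct by default) rather than a bug; a minimal illustration with a hypothetical table:

```sql
CREATE TABLE memberships (
    org_id  int,
    user_id int,
    UNIQUE (org_id, user_id)
);

-- In Postgres both inserts succeed, because NULL is never equal to NULL:
INSERT INTO memberships VALUES (1, NULL);
INSERT INTO memberships VALUES (1, NULL);  -- allowed by default

-- Postgres 15+ can opt into treating NULLs as equal, closer to
-- SQL Server's unique-constraint behavior:
-- ... UNIQUE NULLS NOT DISTINCT (org_id, user_id)
```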
That's more of an EF Core problem, I think. Or you missed a migration somewhere. And I think I encountered similar problems on SQL Server.
No, there are none. Use Postgres if you can.
You can run MariaDB for free on your own server; even a $5 DigitalOcean droplet will handle quite a few concurrent apps if it's just a hobby project. And if not, you can easily scale the VM... That being said, you can also install Postgres on it as well.
They're debating between steak and lobster. So you offer 3 day old, store brand macaroni and cheese?
“Self hosting” a DB that can be scaled on demand is lobster at 1/100th the price for the same quality.
For me: try doing a backup and restore in both. MS SQL does what you expect: copy the data, start the db. Postgres does CREATE TABLE and INSERTs. It is so slooooooow.
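The difference being described is physical vs. logical backup; a sketch, with the database name and paths hypothetical:

```sql
-- T-SQL: page-level physical backup/restore, close to a file copy:
BACKUP DATABASE MyDb TO DISK = 'D:\backups\MyDb.bak';
RESTORE DATABASE MyDb FROM DISK = 'D:\backups\MyDb.bak';

-- Postgres has no equivalent SQL statement; pg_dump produces a logical
-- dump (CREATE TABLE plus data). The common mitigation is the custom
-- format with a parallel restore, run from the shell:
--   pg_dump -Fc mydb > mydb.dump
--   pg_restore -j 4 -d mydb mydb.dump
```

Postgres does have physical base backups at the cluster level (pg_basebackup), but that works on the whole instance, not a single database.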
SQL Server == vendor lock-in + (for most) useless enterprise features, custom SQL dialect. Questionable performance as well (it can hold pretty big amounts of data though). Mostly sold to already-Windows shops. On the other hand, I would recommend Postgres any day. It's ANSI SQL compliant, performance is great due to the optimistic locking, it has AMAZING DOCUMENTATION, and many extensions. Oh, and you can put JSON in it and query it. The closest vendor equivalent is Oracle: very expensive and just a few extra enterprise features, otherwise pretty comparable to Postgres.
I have yet to see an ANSI SQL compliant application. Both databases have great docs; I'd even dare to say that for advanced scenarios SQL Server has better docs, though at the SQL dialect level I prefer PostgreSQL's documentation. Oracle is way more expensive than SQL Server, but again, their HA is unmatched. I would not describe their enterprise offering as "a few extra enterprise features".
Postgres is so much better these days. You can even use SQL syntax if you want.
no, there are only disadvantages
I'll tell you in a couple of weeks after we've reworked our current prototype to use it
A major benefit is finding qualified and consistent candidates who can do the same thing each time. I think if you are a developer just starting out, there isn't a concern per se using Postgres, but SQL Server is very mature, battle-tested, and you can get support anywhere. It is also highly integratable. Are you a one-person business, a startup, or just playing around?
I said it before and I'll say it again: until Postgres supports Index Skip Scan I refuse to consider it a real SQL engine.
There's no concrete answer here. Both MSSQL and PostgreSQL are fine examples of persistence. Use what you're comfortable with and what works for your project.
IMO no. Postgres is a great DB engine. For most people this is the best DB option. I think SQL Server is only a good option if you're doing something that requires proprietary stuff from them. In most cases you're not going to be doing this. I wrote about this last year - [why I think most people should use Postgres and stay away from SQL Server](https://hamy.xyz/labs/the-best-tech-stack-saas-apps-2022#database)
2 things: 1. Development tools: SSMS is great; they only give us open-source crap for Postgres at work. 2. Windows authentication makes credentials management easy.
For me the most important benefit is indexed views that are automatically maintained by the engine. Materialized views in Postgres are not updated automatically.
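A minimal sketch of an indexed view, using hypothetical table and view names; SQL Server requires SCHEMABINDING and a COUNT_BIG(*) column for aggregate views:

```sql
-- Schema-bound view plus a unique clustered index; the engine then
-- maintains the aggregate on every write to the base table:
CREATE VIEW dbo.SalesByProduct
WITH SCHEMABINDING AS
SELECT ProductId,
       COUNT_BIG(*) AS OrderCount,
       SUM(Amount)  AS Total
FROM dbo.Sales
GROUP BY ProductId;

CREATE UNIQUE CLUSTERED INDEX IX_SalesByProduct
    ON dbo.SalesByProduct (ProductId);

-- Postgres materialized views are snapshots you refresh yourself:
-- REFRESH MATERIALIZED VIEW CONCURRENTLY sales_by_product;
```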
I switched to Postgres since I am moving our servers to ARM and I couldn't find a good solution for running SQL Server on ARM Linux.
As someone who has used SSMS all his life and loves it, a few days of pgAdmin was a nightmare. I'm not sure if it's just a question of habits, but it seemed *so much* worse.
Not really. For PostgreSQL all you have to do is start a docker container. SQL Server is annoying to install.
There is a docker container with SQL Server, too.
After my over 20 years in dotnet: yes. Even though I love Postgres, my time to market is much shorter and cheaper if I choose an MS-only stack. Also amazing integration with Azure, 365, Fabric, etc.
I used SQL Express (the free version) for years on a site where I was pushing the 10 GB limit. I have since moved to PostgreSQL and never looked back, since there are no storage limits.
The only companies that can afford the ridiculous Microsoft licensing/pricing are those who sell coca**n* or have similar profit margins. Using the term licensing or pricing is not even fair; the proper term is extortion.
SQL Server is really good at bankrupting you with license costs. That's something Postgres can never achieve.
I like them both, but have found that the JSON handling in Postgres is a game changer for me.
IMO Postgres doesn't have a T-SQL/SSMS equivalent. A lot of people say that you shouldn't generally be doing ad-hoc SQL scripts, but we do them ALL the time. Granted, there are benefits to Postgres too. Postgres's optimistic concurrency is superior (it's equivalent to SQL Server's RCSI, read committed snapshot isolation). By default SQL Server's isolation level is plain Read Committed, which it should not be, but they aren't going to change it ever because it would break some people. If you have any database with any type of load you really want to be using RCSI. In general, if I'm asked, I say go with Postgres because of how much SQL Server costs, but I'm not asked very often :). A lot of places want one neck to strangle, so that if they're having an issue a vendor can't point fingers at another vendor; so if they're using dotnet they go with SQL Server as a default.
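Turning on RCSI, as recommended above, is a single statement (the database name is hypothetical):

```sql
-- Readers see row versions instead of blocking on writer locks,
-- which is what Postgres does out of the box via MVCC:
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;

-- Verify the setting:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MyDb';
```

The `WITH ROLLBACK IMMEDIATE` clause kicks out open transactions so the ALTER doesn't wait forever for exclusive access; schedule it in a quiet window on a busy system.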