Evaluating MongoDB Alternatives: FerretDB, Oracle, MariaDB
In the first week of January 2024, Gunnar Morling introduced an entertaining challenge that garnered considerable interest: what is the quickest way to aggregate and summarise a billion rows of data? The challenge, called the One Billion Row Challenge (1BRC for short), is designed for Java developers to explore fresh language features and optimization methods. I found it immensely enjoyable, but since I am not a Java developer myself, I simply followed along. Connor McDonald, Gerald Venzl, Robin Moffatt, Francesco Tisiot, and numerous others have also blogged about it from a database perspective.
At the PGDay Ankara conference on April 18th, 2024, Murat Tuncer, a software engineer at Microsoft, demonstrated to attendees how FerretDB, which uses a PostgreSQL backend, can operate as a seamless alternative to MongoDB. This aligns nicely with the prevalent mantra of recent times: “Just use Postgres for everything.”
As you are probably aware, Oracle Database 23ai was announced on May 3rd, 2024, boasting over 300 new features. Among them is the Oracle Database API for MongoDB, which lets Oracle Database understand Mongo-speak.
Indeed, Oracle Database API for MongoDB does sound like an unusual combination at first glance. Oracle Database is a traditional relational database management system (RDBMS), while MongoDB is a popular NoSQL database known for handling unstructured or semi-structured data.
Both FerretDB and the Oracle MongoDB API allow developers with MongoDB skills to write code in MongoDB syntax and store data in relational database management systems (RDBMS). This concept seems fascinating to me. I’m eager to delve deeper into it.
I’ve decided to put both FerretDB and the Oracle Database 23ai MongoDB API to the test to see if they can serve as alternatives to MongoDB in the realm of NoSQL databases. It’s important to note that this is not a benchmark test. My evaluation was conducted using almost default installations of both databases without any optimization. The purpose of the blog post is solely to showcase the solutions provided and compare their output to that of vanilla MongoDB.
I’ve created a straightforward bulk load script named “1mri_mongo.js,” which is provided below. For uploading data, I utilized my MacBook M1 with 16GB of RAM as a client, running Node.js version 22.0.0.
const fs = require("fs");
const mongodb = require("mongodb").MongoClient;
const fastimp = require("fast-csv");

// Start timing
const startTime = process.hrtime();

// MongoDB connection string
let url = "mongodb://insanedba:insane@192.168.1.31:27001/test";

let stream = fs.createReadStream("measurements.txt");
let statData = [];

let fastream = fastimp
    .parse({delimiter: ';'})
    .on("data", function(data) {
        statData.push({
            station_name: data[0],
            measurement: data[1]
        });
    })
    .on("end", function() {
        mongodb.connect(
            url,
            { useNewUrlParser: true, useUnifiedTopology: true },
            (err, client) => {
                if (err) throw err;
                client
                    .db("test")
                    .collection("posts")
                    .insertMany(statData, (err, res) => {
                        if (err) throw err;
                        console.log(`Inserted: ${res.insertedCount} rows`);
                        // End timing
                        const endTime = process.hrtime(startTime);
                        const elapsedSeconds = endTime[0];
                        const elapsedNanoseconds = endTime[1];
                        const elapsedTimeInMilliseconds = (elapsedSeconds * 1000) + (elapsedNanoseconds / 1000000);
                        console.log(`Operation took: ${elapsedTimeInMilliseconds} milliseconds`);
                        client.close();
                    });
            }
        );
    });

stream.pipe(fastream);
The data was generated using the “create_measurements.sh” script from the original 1BRC repository; the detailed steps are below. First, fork the 1brc repo and clone it locally. Then install SDKMAN, Java, Node.js, NVM (Node Version Manager), and the fast-csv, mongoose (which pulls in the mongodb driver used by the scripts), and fs npm packages on the client machine.
-- Install SDKMAN
osmandinc@192 ~ % curl -s "https://get.sdkman.io" | bash
osmandinc@192 ~ % source "/Users/osmandinc/.sdkman/bin/sdkman-init.sh"
osmandinc@192 ~ % sdk version
...
SDKMAN!
script: 5.18.2
native: 0.4.6

-- Install Java
osmandinc@192 ~ % sdk install java 21.0.1-zulu
osmandinc@192 ~ % sdk use java 21.0.1-zulu

-- Clone repo and prepare data
osmandinc@192 ~ % cd Downloads/1brc-main
osmandinc@192 1brc-main % ./mvnw clean verify
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:02 min
[INFO] Finished at: 2024-05-02T21:24:11+03:00
[INFO] ------------------------------------------------------------------------
osmandinc@192 1brc-main % ./create_measurements.sh 1000000
osmandinc@192 1brc-main % ls -lsth measurements.txt
26952 -rw-r--r--  1 osmandinc  staff  13M May  2 21:25 measurements.txt

-- Install Node, NVM and npm packages
osmandinc@192 mongoapi % brew install node
osmandinc@192 mongoapi % brew install nvm
osmandinc@192 ~ % node --version
v22.0.0
osmandinc@192 ~ % mkdir ~/.nvm
osmandinc@192 ~ % vi .zprofile
export NVM_DIR="$HOME/.nvm"
# This loads nvm
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
# This loads nvm bash_completion
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
osmandinc@192 ~ % npm i -S fast-csv
osmandinc@192 ~ % npm i -S fs
osmandinc@192 ~ % npm i -S path
osmandinc@192 ~ % npm i -S mongoose
osmandinc@192 ~ % npm i -S dotenv
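Each line of measurements.txt has the form station_name;temperature. A quick way to sanity-check the generated file is to print its first few lines with Node's built-in readline module (a minimal sketch, not part of the load itself):

// preview.js - print the first few lines of measurements.txt (sketch)
const fs = require("fs");
const readline = require("readline");

const rl = readline.createInterface({
    input: fs.createReadStream("measurements.txt"),
    crlfDelay: Infinity
});

let shown = 0;
rl.on("line", (line) => {
    console.log(line);            // e.g. "Hamburg;12.0"
    if (++shown >= 5) rl.close();
});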
I will use four different virtual machines running Oracle Linux Server release 8.7 for testing. I have set up port forwarding rules on my notebook (192.168.1.31) for these four VMs; to avoid confusion, they are listed below.
| Host System:Port | Database Version | VM Host Name | Port |
| 192.168.1.31:27001 | MongoDB Community edition 7.0.9 | mongo01.localdomain (192.168.60.201) | 27001 |
| 192.168.1.31:27002 | FerretDB v1.21.0 and PostgreSQL 16.2 | ferret01.localdomain (192.168.60.202) | 27002 |
| 192.168.1.31:27003 | Oracle Database 23ai Free Version 23.4.0.24.05 | oradev01.localdomain (192.168.60.203) | 27003 |
| 192.168.1.31:27004 | MariaDB 11.4.2 with NoSQL Protocol Module (MaxScale) | maria01.localdomain (192.168.60.204) | 27004 |
1 Million Row Insert (1MRI) on MongoDB Community Edition
I will begin testing with MongoDB Community Edition 7.0.9 on Oracle Linux Server release 8.7. I have outlined simple MongoDB installation steps in a blog post titled ‘MongoDB Community Version Installation on Oracle Linux Server‘.
osmandinc@192 mongoapi % node 1mri_mongo.js
(node:23091) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
Inserted: 1000000 rows
Operation took: 21868.841958 milliseconds

[root@mongo01 ~]# mongosh mongodb://insanedba:insane@192.168.1.31:27001/test
Current Mongosh Log ID: 663681a98dee05e7f72202d7
Connecting to:          mongodb://<credentials>@192.168.1.31:27001/test?directConnection=true&appName=mongosh+2.2.5
Using MongoDB:          7.0.9
Using Mongosh:          2.2.5

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

test> db.posts.count();
DeprecationWarning: Collection.count() is deprecated. Use countDocuments or estimatedDocumentCount.
1000000
It took roughly 22 seconds to upload 1 million rows into vanilla MongoDB using the fast-csv library and the insertMany method.
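As the deprecation warning in the mongosh output suggests, count() has been superseded; a small verification script using the supported countDocuments() helper could look like this (a sketch reusing the same connection string):

// verify_count.js - check the number of loaded documents (sketch)
const { MongoClient } = require("mongodb");

const url = "mongodb://insanedba:insane@192.168.1.31:27001/test";

(async () => {
    const client = new MongoClient(url);
    try {
        await client.connect();
        // countDocuments() replaces the deprecated count()
        const n = await client.db("test").collection("posts").countDocuments();
        console.log(`posts contains ${n} documents`);
    } finally {
        await client.close();
    }
})();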
1 Million Row Insert (1MRI) on FerretDB
FerretDB allows you to use MongoDB drivers seamlessly with PostgreSQL as the database backend.
Now I will test FerretDB v1.21.0 with PostgreSQL 16.2 on Oracle Linux Server release 8.7.
I have set up a simple FerretDB instance with PostgreSQL as the database backend, configured to listen on port 27002. The installation steps are provided in the blog post ‘FerretDB Installation using PostgreSQL as a database engine’.
I will use the script provided below; it is identical to the MongoDB version except for the connection string.
const fs = require("fs");
const mongodb = require("mongodb").MongoClient;
const fastimp = require("fast-csv");

// Start timing
const startTime = process.hrtime();

// Mongo API connection string
let url = "mongodb://postgres:postgres@192.168.1.31:27002/ferretdb?authMechanism=PLAIN";

let stream = fs.createReadStream("measurements.txt");
let statData = [];

let fastream = fastimp
    .parse({delimiter: ';'})
    .on("data", function(data) {
        statData.push({
            station_name: data[0],
            measurement: data[1]
        });
    })
    .on("end", function() {
        mongodb.connect(
            url,
            { useNewUrlParser: true, useUnifiedTopology: true },
            (err, client) => {
                if (err) throw err;
                client
                    .db("test")
                    .collection("posts")
                    .insertMany(statData, (err, res) => {
                        if (err) throw err;
                        console.log(`Inserted: ${res.insertedCount} rows`);
                        // End timing
                        const endTime = process.hrtime(startTime);
                        const elapsedSeconds = endTime[0];
                        const elapsedNanoseconds = endTime[1];
                        const elapsedTimeInMilliseconds = (elapsedSeconds * 1000) + (elapsedNanoseconds / 1000000);
                        console.log(`Operation took: ${elapsedTimeInMilliseconds} milliseconds`);
                        client.close();
                    });
            }
        );
    });

stream.pipe(fastream);
I will now insert 1000000 rows into FerretDB and assess its performance compared to vanilla MongoDB.
osmandinc@192 mongoapi % node 1mri_ferret.js
(node:25327) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
Inserted: 1000000 rows
Operation took: 65403.59325 milliseconds

[root@mongo01 ~]# mongosh mongodb://postgres:postgres@192.168.60.202:27002/ferretdb?authMechanism=PLAIN
Current Mongosh Log ID: 6636743d70a76b19852202d7
Connecting to:          mongodb://<credentials>@192.168.60.202:27002/ferretdb?authMechanism=PLAIN&directConnection=true&appName=mongosh+2.2.5
Using MongoDB:          7.0.42
Using Mongosh:          2.2.5

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

------
   The server generated these startup warnings when booting
   2024-05-04T17:45:33.177Z: Powered by FerretDB v1.21.0 and PostgreSQL 16.2 on x86_64-pc-linux-gnu, compiled by gcc.
   2024-05-04T17:45:33.177Z: Please star us on GitHub: https://github.com/FerretDB/FerretDB.
   2024-05-04T17:45:33.177Z: The telemetry state is undecided.
   2024-05-04T17:45:33.177Z: Read more about FerretDB telemetry and how to opt out at https://beacon.ferretdb.com.
------

ferretdb> use test;
switched to db test
test> db.posts.count();
1000000
It took about 65 seconds to upload 1 million rows into FerretDB. Next, I will connect to the PostgreSQL database with psql and check the records.
[postgres@ferret01 data]$ psql -d ferretdb
psql (16.2)
Type "help" for help.

-- For a mongodb database - a schema created in postgresql
-- For a mongodb collection - a relation/table created in postgresql
ferretdb=# SET schema 'test';
SET
ferretdb=# \dt
                    List of relations
 Schema |            Name             | Type  |  Owner
--------+-----------------------------+-------+----------
 test   | _ferretdb_database_metadata | table | postgres
 test   | posts_4c2edfdc              | table | postgres
(2 rows)

ferretdb=# select count(*) from posts_4c2edfdc;
  count
---------
 1000000
(1 row)
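Out of curiosity, the same table can also be read directly from Node.js with the pg driver; a minimal sketch (it assumes the pg npm package is installed, that PostgreSQL listens on its default port, and it reuses the table name shown by \dt above):

// peek_ferret.js - read one row straight from PostgreSQL (sketch)
const { Client } = require("pg");

(async () => {
    const client = new Client({
        host: "192.168.60.202",
        port: 5432,               // assumed default PostgreSQL port
        user: "postgres",
        password: "postgres",
        database: "ferretdb"
    });
    await client.connect();
    // Table name taken from the \dt output above; FerretDB appends a suffix to the collection name
    const res = await client.query("SELECT * FROM test.posts_4c2edfdc LIMIT 1");
    console.log(res.rows[0]);
    await client.end();
})();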
Despite FerretDB being roughly three times slower (65 seconds versus about 22), we seamlessly used a PostgreSQL database as the backend without altering a single line of application code.
Note: PostgreSQL itself can achieve much better results, particularly when the application follows PostgreSQL idioms such as the COPY command, whereas FerretDB issues INSERT statements in the background. For this test, the point was simply to check whether the same code could run without any modification.
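For reference, a native PostgreSQL load of the same file would typically stream it through COPY. Below is a rough sketch using the pg and pg-copy-streams packages; the target table and its columns are my own assumption for illustration and have nothing to do with the table FerretDB manages:

// copy_load.js - load measurements.txt with COPY (sketch, assumed plain table)
const fs = require("fs");
const { Client } = require("pg");
const { from: copyFrom } = require("pg-copy-streams");
const { pipeline } = require("stream/promises");

(async () => {
    const client = new Client({
        host: "192.168.60.202",
        user: "postgres",
        password: "postgres",
        database: "ferretdb"
    });
    await client.connect();
    // Hypothetical relational table, not the FerretDB-managed one
    await client.query(
        "CREATE TABLE IF NOT EXISTS measurements (station_name text, measurement numeric)"
    );
    // Stream the file straight into the table via COPY ... FROM STDIN
    const copyStream = client.query(
        copyFrom("COPY measurements FROM STDIN WITH (FORMAT csv, DELIMITER ';')")
    );
    await pipeline(fs.createReadStream("measurements.txt"), copyStream);
    await client.end();
})();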
1 Million Row Insert (1MRI) on Oracle Database 23ai with Mongo API
This is my first time using Oracle Database 23ai in practice. I’m very curious about how it will perform. Let’s see.
I have installed Oracle Database 23ai on Oracle Linux Server release 8.7 using the RPM method, configured a sample pluggable database for testing, and installed Oracle REST Data Services (ORDS). After installing ORDS, I configured the Mongo API on a custom port (27003) and created a test user. All steps are provided in my blog post ‘Mongo API Installation for Oracle Database’.
I will use the script provided below; only the connection string and the database name differ from the MongoDB version.
const fs = require("fs");
const mongodb = require("mongodb").MongoClient;
const fastimp = require("fast-csv");

// Start timing
const startTime = process.hrtime();

// Oracle Database API for MongoDB connection string
let url = "mongodb://insanedba:bjk@192.168.1.31:27003/insanedba?authMechanism=PLAIN&authSource=$external&tlsInsecure=true&ssl=true&retryWrites=false&loadBalanced=true";

let stream = fs.createReadStream("measurements.txt");
let statData = [];

let fastream = fastimp
    .parse({delimiter: ';'})
    .on("data", function(data) {
        statData.push({
            station_name: data[0],
            measurement: data[1]
        });
    })
    .on("end", function() {
        mongodb.connect(
            url,
            { useNewUrlParser: true, useUnifiedTopology: true },
            (err, client) => {
                if (err) throw err;
                client
                    .db("insanedba")
                    .collection("posts")
                    .insertMany(statData, (err, res) => {
                        if (err) throw err;
                        console.log(`Inserted: ${res.insertedCount} rows`);
                        // End timing
                        const endTime = process.hrtime(startTime);
                        const elapsedSeconds = endTime[0];
                        const elapsedNanoseconds = endTime[1];
                        const elapsedTimeInMilliseconds = (elapsedSeconds * 1000) + (elapsedNanoseconds / 1000000);
                        console.log(`Operation took: ${elapsedTimeInMilliseconds} milliseconds`);
                        client.close();
                    });
            }
        );
    });

stream.pipe(fastream);
I will now insert 1,000,000 rows using the Oracle Database API for MongoDB (Mongo API for short) and compare its performance against vanilla MongoDB.
osmandinc@192 mongoapi % node 1mri_oracle.js
(node:28174) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
Inserted: 1000000 rows
Operation took: 64296.430708 milliseconds

[root@mongo01 ~]# mongosh --tlsAllowInvalidCertificates 'mongodb://insanedba:bjk@192.168.60.203:27003/insanedba?authMechanism=PLAIN&authSource=$external&ssl=true&retryWrites=false&loadBalanced=true'
Current Mongosh Log ID: 6636b47ccd6b23e9e02202d7
Connecting to:          mongodb://<credentials>@192.168.60.203:27003/insanedba?authMechanism=PLAIN&authSource=%24external&ssl=true&retryWrites=false&loadBalanced=true&tlsAllowInvalidCertificates=true&appName=mongosh+2.2.5
Using MongoDB:          4.2.14
Using Mongosh:          2.2.5

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

insanedba> db.posts.count();
DeprecationWarning: Collection.count() is deprecated. Use countDocuments or estimatedDocumentCount.
1000000
Despite the Mongo API being roughly three times slower (64 seconds), we seamlessly used an Oracle database as the backend without altering a single line of application code.
Note: Oracle Database can achieve much better results, particularly when the application follows Oracle idioms such as SQL*Loader (sqlldr) with the parallel option.
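Since the Mongo API stores each collection as a JSON collection table in the owning schema, the loaded data can also be checked from the SQL side. The sketch below uses the node-oracledb driver; the connect string (FREEPDB1 service) and the assumption that the table carries the collection name are mine and may differ in your setup:

// peek_oracle.js - count the rows from the SQL side (sketch, assumed names)
const oracledb = require("oracledb");

(async () => {
    const connection = await oracledb.getConnection({
        user: "insanedba",
        password: "bjk",
        connectString: "oradev01.localdomain:1521/freepdb1"   // assumed service name
    });
    // Assumption: the "posts" collection is backed by a table of the same name
    const result = await connection.execute("SELECT COUNT(*) FROM posts");
    console.log(result.rows[0][0]);
    await connection.close();
})();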
Update in June 2024:
1 Million Row Insert (1MRI) on NoSQL Protocol Module for MariaDB
Johan Wikman, from the MariaDB development team behind the NoSQL Protocol Module, reached out and asked whether I could test MariaDB's solution. At the time I was unfamiliar with MariaDB's offering and, since I had not used MariaDB before, I was eager to try it out and see how it would perform. In Johan's own tests it outperformed both the Oracle Mongo API and FerretDB. Now I will test it as well.
I have installed MariaDB 11.4.2 on Oracle Linux Server release 8.7 using the RPM method, set up a minimal MaxScale configuration, and configured the NoSQL Protocol Module on a custom port (27004). All steps are provided in my blog post ‘NoSQL Protocol Module for MariaDB’.
I will use the script provided below; this time no credentials are needed in the connection string.
const fs = require("fs");
const mongodb = require("mongodb").MongoClient;
const fastimp = require("fast-csv");

// Start timing
const startTime = process.hrtime();

// MongoDB connection string
let url = "mongodb://192.168.1.31:27004";

let stream = fs.createReadStream("measurements.txt");
let statData = [];

let fastream = fastimp
    .parse({delimiter: ';'})
    .on("data", function(data) {
        statData.push({
            station_name: data[0],
            measurement: data[1]
        });
    })
    .on("end", function() {
        mongodb.connect(
            url,
            { useNewUrlParser: true, useUnifiedTopology: true },
            (err, client) => {
                if (err) throw err;
                client
                    .db("test")
                    .collection("posts")
                    .insertMany(statData, (err, res) => {
                        if (err) throw err;
                        console.log(`Inserted: ${res.insertedCount} rows`);
                        // End timing
                        const endTime = process.hrtime(startTime);
                        const elapsedSeconds = endTime[0];
                        const elapsedNanoseconds = endTime[1];
                        const elapsedTimeInMilliseconds = (elapsedSeconds * 1000) + (elapsedNanoseconds / 1000000);
                        console.log(`Operation took: ${elapsedTimeInMilliseconds} milliseconds`);
                        client.close();
                    });
            }
        );
    });

stream.pipe(fastream);
I will now insert 1,000,000 rows using the NoSQL Protocol Module for MariaDB and compare its performance against vanilla MongoDB.
osmandinc@192 mongoapi % node 1mri_maria.js
(node:12520) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
Inserted: 1000000 rows
Operation took: 42442.361542 milliseconds

[root@mongo01 ~]# mongosh --host 192.168.60.204 --port 27004
Current Mongosh Log ID: 665c9f36406e5818812202d7
Connecting to:          mongodb://192.168.60.204:27004/?directConnection=true&appName=mongosh+2.2.5
Using MongoDB:          4.4.1
Using Mongosh:          2.2.5
mongosh 2.2.6 is available for download: https://www.mongodb.com/try/download/shell

For mongosh info see: https://docs.mongodb.com/mongodb-shell/

test> db.posts.count();
DeprecationWarning: Collection.count() is deprecated. Use countDocuments or estimatedDocumentCount.
1000000

[root@maria01 ~]# mariadb
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 37
Server version: 11.4.2-MariaDB MariaDB Server

MariaDB [test]> show columns from posts;
+-------+-------------+------+-----+---------+-------------------+
| Field | Type        | Null | Key | Default | Extra             |
+-------+-------------+------+-----+---------+-------------------+
| id    | varchar(35) | YES  | UNI | NULL    | VIRTUAL GENERATED |
| doc   | longtext    | YES  |     | NULL    |                   |
+-------+-------------+------+-----+---------+-------------------+
2 rows in set (0.002 sec)
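As the column listing shows, each document ends up as JSON text in the doc column, so it can be read back with plain SQL as well. A small sketch with the mariadb Node.js driver follows; the credentials and port are placeholders for my lab setup, not values taken from the output above:

// peek_maria.js - read documents from the SQL side (sketch, placeholder credentials)
const mariadb = require("mariadb");

(async () => {
    const conn = await mariadb.createConnection({
        host: "192.168.60.204",
        port: 3306,              // assumed default MariaDB port
        user: "insanedba",       // hypothetical account
        password: "insane",
        database: "test"
    });
    // JSON_VALUE extracts a scalar from the JSON document stored in doc
    const rows = await conn.query(
        "SELECT JSON_VALUE(doc, '$.station_name') AS station FROM posts LIMIT 3"
    );
    console.log(rows);
    await conn.end();
})();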
In my environment, MariaDB's NoSQL Protocol Module performed better than both Oracle's Mongo API and FerretDB (42 seconds versus roughly 64-65), though still slower than vanilla MongoDB. Again, we used MariaDB as the backend without needing to alter any application code.
Note: Credit is owed to Johan Wikman and his colleagues.
Results
Oracle, PostgreSQL, and MariaDB are all capable of understanding Mongo-speak: Oracle through ORDS, PostgreSQL through FerretDB, and MariaDB through the NoSQL Protocol Module in MaxScale. Since none of them is a native NoSQL database, they all performed less efficiently than vanilla MongoDB in this simple bulk-load test.
Still, it is noteworthy that they have taken a significant step: they not only meet the expectations of an RDBMS but also cover the core requirements of a NoSQL document store.
I also found it interesting that FerretDB reports a recent server version (MongoDB 7.0.42), whereas Oracle's Database API for MongoDB (4.2.14) and MariaDB's NoSQL Protocol Module (4.4.1) report older versions.
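To close the loop with the original 1BRC task, the per-station min/mean/max can be expressed as a single aggregation pipeline in Mongo-speak. The sketch below runs against the vanilla MongoDB instance; because the load script stores the measurement as a string, it relies on $toDouble, and I have not verified that every backend tested here supports that operator:

// 1brc_agg.js - 1BRC-style summary per station (sketch)
const { MongoClient } = require("mongodb");

const url = "mongodb://insanedba:insane@192.168.1.31:27001/test";

(async () => {
    const client = new MongoClient(url);
    await client.connect();
    const cursor = client.db("test").collection("posts").aggregate([
        {
            $group: {
                _id: "$station_name",
                min:  { $min: { $toDouble: "$measurement" } },
                mean: { $avg: { $toDouble: "$measurement" } },
                max:  { $max: { $toDouble: "$measurement" } }
            }
        },
        { $sort: { _id: 1 } }
    ]);
    for await (const doc of cursor) {
        console.log(`${doc._id}: ${doc.min}/${doc.mean.toFixed(1)}/${doc.max}`);
    }
    await client.close();
})();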
Hope it helps.