
Backup/Restore MongoDB using mongoexport, mongoimport, mongodump, mongorestore (MongoDB backup and recovery)

In the following examples I will demonstrate how to back up and restore MongoDB: mongoexport and mongoimport work with JSON and CSV files, while mongodump and mongorestore work with BSON (Binary JSON) files.

1. Backup database with mongoexport

First, review some of the commonly used options.
[mongo@db231 ~]$ mongoexport --help
Export MongoDB data to CSV, TSV or JSON files.

--dbpath arg

directly access mongod database files in the given path, instead of connecting to a mongod server - needs to lock the data directory, so cannot be used if a mongod is currently accessing the same path

# In this case dbpath is /data/db. The following exports directly from the data files; this kind of backup requires shutting down the mongod instance first.

[mongo@db231 ~]$ mongoexport --dbpath=/data/db -o all.json

If you are running a mongod on the same path you should connect to that instead of direct data file access

2014-05-04T10:34:51.843+0800 [tools] dbexit: 
2014-05-04T10:34:51.844+0800 [tools] shutdown: going to close listening sockets...
2014-05-04T10:34:51.844+0800 [tools] shutdown: going to flush diaglog...
2014-05-04T10:34:51.844+0800 [tools] shutdown: going to close sockets...
2014-05-04T10:34:51.844+0800 [tools] shutdown: waiting for fs preallocator...
2014-05-04T10:34:51.844+0800 [tools] shutdown: closing all files...
2014-05-04T10:34:51.844+0800 [tools] closeAllFiles() finished
2014-05-04T10:34:51.844+0800 [tools] dbexit: really exiting now


[mongo@db231 ~]$ ps -ef|grep mongod| grep -v grep
mongo     4867     1  0 Apr30 ?        00:07:28 mongod --config /etc/mongodb.conf

[mongo@db231 ~]$ mongod --shutdown
killing process with pid: 4867

[mongo@db231 ~]$ ps -ef|grep mongod
mongo    16802 16736  0 10:37 pts/1    00:00:00 grep mongod

[mongo@db231 ~]$ mongoexport --dbpath=/data/db -o all.json
no collection specified!
Export MongoDB data to CSV, TSV or JSON files.

[mongo@db231 ~]$ cd /data/db
[mongo@db231 db]$ ls
admin.0  admin.ns  journal  local.0  local.ns  log  mongod.lock  test.0  test.ns
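Note the mongod.lock file in the listing above: direct --dbpath access needs to lock the data directory, so a quick pre-check saves a failed run. A minimal sketch, assuming the conventional lock-file layout (a non-empty mongod.lock normally means a mongod owns the directory, though a crashed mongod can also leave a stale one behind):

```shell
#!/bin/sh
# Sketch: before using --dbpath, check whether a mongod still holds the lock.
# A non-empty mongod.lock usually means a mongod is attached to the directory
# (or crashed without cleaning up); the path argument is your own dbpath.
dbpath_is_free() {
    dbpath="$1"
    if [ -s "$dbpath/mongod.lock" ]; then
        echo "locked"        # a mongod owns this dbpath; shut it down first
        return 1
    fi
    echo "free"              # safe to run mongoexport with --dbpath
    return 0
}
```

Usage: `dbpath_is_free /data/db >/dev/null && mongoexport --dbpath=/data/db -d admin -c system.users -o all.json`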

# Check collection names inside the MongoDB data files using the Linux strings utility. In this case I will export the 'system.users' collection of the 'admin' db.

[mongo@db231 db]$ strings admin.ns |tail -100  

[mongo@db231 db]$ mongoexport -d admin -c system.users  --dbpath=/data/db -o all.json           
exported 2 records
2014-05-04T10:45:35.289+0800 [tools] dbexit: 
2014-05-04T10:45:35.289+0800 [tools] shutdown: going to close listening sockets...
2014-05-04T10:45:35.289+0800 [tools] shutdown: going to flush diaglog...
2014-05-04T10:45:35.290+0800 [tools] shutdown: going to close sockets...
2014-05-04T10:45:35.290+0800 [tools] shutdown: waiting for fs preallocator...
2014-05-04T10:45:35.290+0800 [tools] shutdown: closing all files...
2014-05-04T10:45:35.290+0800 [tools] closeAllFiles() finished
2014-05-04T10:45:35.290+0800 [tools] shutdown: removing fs lock...
2014-05-04T10:45:35.290+0800 [tools] dbexit: really exiting now

[mongo@db231 db]$ cat  all.json 
{ "_id" : "admin.root", "user" : "root", "db" : "admin", "credentials" : { "MONGODB-CR" : "7bc9aa6753e5241290fd85fece372bd8" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
{ "_id" : "test.anbob", "user" : "anbob", "db" : "test", "credentials" : { "MONGODB-CR" : "870c3c636f8f34ab73c5974df971190f" }, "roles" : [ { "role" : "clusterAdmin", "db" : "admin" }, { "role" : "readAnyDatabase", "db" : "admin" }, { "role" : "readWrite", "db" : "test" } ] }

# Export all documents with only the "user" and "roles" fields.
[mongo@db231 db]$ mongoexport -d admin -c system.users -f "user,roles" --dbpath=/data/db -o all.json    
exported 2 records
..

[mongo@db231 db]$ cat all.json 
{ "_id" : "admin.root", "user" : "root", "roles" : [ { "role" : "root", "db" : "admin" } ] }
{ "_id" : "test.anbob", "user" : "anbob", "roles" : [ { "role" : "clusterAdmin", "db" : "admin" }, { "role" : "readAnyDatabase", "db" : "admin" }, { "role" : "readWrite", "db" : "test" } ] }

# Export documents matching a search query; in this case, only documents with "user=anbob" will be exported.
[mongo@db231 db]$ mongoexport -d admin -c system.users  -q "{user:'anbob'}" --dbpath=/data/db -o all.json
exported 1 records

[mongo@db231 db]$ cat all.json 
{ "_id" : "test.anbob", "user" : "anbob", "db" : "test", "credentials" : { "MONGODB-CR" : "870c3c636f8f34ab73c5974df971190f" }, "roles" : [ { "role" : "clusterAdmin", "db" : "admin" }, { "role" : "readAnyDatabase", "db" : "admin" }, { "role" : "readWrite", "db" : "test" } ] }

# Export all documents in CSV format.

[mongo@db231 db]$ mongoexport -d admin -c system.users --csv  -q "{user:'anbob'}" --dbpath=/data/db -o all.json
csv mode requires a field list

[mongo@db231 db]$ mongoexport -d admin -c system.users -f "user,roles" --csv --dbpath=/data/db -o all.json
exported 2 records

[mongo@db231 db]$ cat all.json 
user,roles
"root","[ { ""role"" : ""root"", ""db"" : ""admin"" } ]"
"anbob","[ { ""role"" : ""clusterAdmin"", ""db"" : ""admin"" }, { ""role"" : ""readAnyDatabase"", ""db"" : ""admin"" }, { ""role"" : ""readWrite"", ""db"" : ""test"" } ]"
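The two constraints shown above (csv mode refuses to run without a field list, and each run produces one output file) make CSV exports a natural candidate for scripting. A dry-run sketch that only prints the command it would run; the database, collection, and field names are illustrative:

```shell
#!/bin/sh
# Sketch: build a mongoexport CSV command for a given collection (dry run).
# --csv always needs -f: mongoexport rejects csv mode without a field list.
csv_export_cmd() {
    db="$1"; coll="$2"; fields="$3"
    echo "mongoexport -d $db -c $coll -f $fields --csv -o $db.$coll.csv"
}

csv_export_cmd admin system.users "user,roles"
```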


# Next, I export without the direct-access feature, by connecting to a running mongod instead. First, start up the mongod process.

[mongo@db231 db]$ mongod -f /etc/mongodb.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 17003
child process started successfully, parent exiting

[mongo@db231 db]$ ps -ef|grep mongod          
mongo    17003     1  1 10:58 ?        00:00:00 mongod -f /etc/mongodb.conf
mongo    17017 16873  0 10:58 pts/1    00:00:00 grep mongod

[mongo@db231 db]$ mongo 192.168.168.231/admin -u root -p mongo   
MongoDB shell version: 2.6.0
connecting to: 192.168.168.231/admin
> show collections;
system.indexes
system.users
system.version
> exit
bye

[mongo@db231 db]$ mongoexport -h 192.168.168.231  -u root -p mongo -d admin -c system.users -o u.json
connected to: 192.168.168.231
exported 2 records

[mongo@db231 db]$ cat u.json 
{ "_id" : "admin.root", "user" : "root", "db" : "admin", "credentials" : { "MONGODB-CR" : "7bc9aa6753e5241290fd85fece372bd8" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
{ "_id" : "test.anbob", "user" : "anbob", "db" : "test", "credentials" : { "MONGODB-CR" : "870c3c636f8f34ab73c5974df971190f" }, "roles" : [ { "role" : "clusterAdmin", "db" : "admin" }, { "role" : "readAnyDatabase", "db" : "admin" }, { "role" : "readWrite", "db" : "test" } ] }

2. Restore database with mongoimport

# Import all documents from the file u.json (generated above) into the collection test.impuser. Any database or collection that does not exist will be created automatically.

[mongo@db231 db]$ mongoimport -h 192.168.168.231 -u anbob -p mongo -d test -c impuser --file u.json
connected to: 192.168.168.231
2014-05-04T11:06:09.853+0800 imported 2 objects

[mongo@db231 db]$ mongo  192.168.168.231/test -u anbob -p mongo  
MongoDB shell version: 2.6.0
connecting to: 192.168.168.231/test
> show collections;
fs.chunks
fs.files
impuser
system.indexes
testtab

> db.impuser.find();
{ "_id" : "admin.root", "user" : "root", "db" : "admin", "credentials" : { "MONGODB-CR" : "7bc9aa6753e5241290fd85fece372bd8" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
{ "_id" : "test.anbob", "user" : "anbob", "db" : "test", "credentials" : { "MONGODB-CR" : "870c3c636f8f34ab73c5974df971190f" }, "roles" : [ { "role" : "clusterAdmin", "db" : "admin" }, { "role" : "readAnyDatabase", "db" : "admin" }, { "role" : "readWrite", "db" : "test" } ] }

# update some data.

> db.impuser.update({"user":"root"},{$inc:{"logins":10}})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })

> db.impuser.find({"user":"root"});
{ "_id" : "admin.root", "user" : "root", "db" : "admin", "credentials" : { "MONGODB-CR" : "7bc9aa6753e5241290fd85fece372bd8" }, "roles" : [ { "role" : "root", "db" : "admin" } ], "logins" : 10 }

> db.impuser.update({"user":"root"},{$set:{"roles":""}});
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
> db.impuser.find();
{ "_id" : "admin.root", "user" : "root", "db" : "admin", "credentials" : { "MONGODB-CR" : "7bc9aa6753e5241290fd85fece372bd8" }, "roles" : "", "logins" : 10 }
{ "_id" : "test.anbob", "user" : "anbob", "db" : "test", "credentials" : { "MONGODB-CR" : "870c3c636f8f34ab73c5974df971190f" }, "roles" : [ { "role" : "clusterAdmin", "db" : "admin" }, { "role" : "readAnyDatabase", "db" : "admin" }, { "role" : "readWrite", "db" : "test" } ] }
> exit
bye

# Import again. The duplicate key errors are expected when the objects already exist: without the --upsert option, mongoimport never updates or overwrites existing documents.

[mongo@db231 db]$ mongoimport -h 192.168.168.231 -u anbob -p mongo -d test -c impuser --file u.json
connected to: 192.168.168.231
2014-05-04T11:17:28.379+0800 insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.impuser.$_id_  dup key: { : "admin.root" }
2014-05-04T11:17:28.380+0800 insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.impuser.$_id_  dup key: { : "test.anbob" }
2014-05-04T11:17:28.380+0800 imported 2 objects
[mongo@db231 db]$ mongo  192.168.168.231/test -u anbob -p mongo                                    
MongoDB shell version: 2.6.0
connecting to: 192.168.168.231/test
> db.impuser.find();
{ "_id" : "admin.root", "user" : "root", "db" : "admin", "credentials" : { "MONGODB-CR" : "7bc9aa6753e5241290fd85fece372bd8" }, "roles" : "", "logins" : 10 }
{ "_id" : "test.anbob", "user" : "anbob", "db" : "test", "credentials" : { "MONGODB-CR" : "870c3c636f8f34ab73c5974df971190f" }, "roles" : [ { "role" : "clusterAdmin", "db" : "admin" }, { "role" : "readAnyDatabase", "db" : "admin" }, { "role" : "readWrite", "db" : "test" } ] }
> exit
bye
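The run above deserves a second look: mongoimport printed two E11000 duplicate key errors yet still reported "imported 2 objects", so the summary line alone does not tell you whether anything was written. One way to check is to capture the output and count duplicate-key complaints; a sketch (the log filename is a placeholder):

```shell
#!/bin/sh
# Sketch: count E11000 duplicate-key errors in a captured mongoimport log.
# Capture the log with: mongoimport ... --file u.json 2>&1 | tee import.log
count_dup_errors() {
    grep -c "E11000" "$1"
}
```

Usage: `count_dup_errors import.log` prints the number of documents that were rejected as duplicates.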

# Import all documents, inserting new objects and updating those that already exist (matched on _id).

[mongo@db231 db]$ mongoimport -h 192.168.168.231 -u anbob -p mongo -d test -c impuser --file u.json --upsert
connected to: 192.168.168.231
2014-05-04T11:18:24.932+0800 imported 2 objects
[mongo@db231 db]$ mongo  192.168.168.231/test -u anbob -p mongo
MongoDB shell version: 2.6.0
connecting to: 192.168.168.231/test
> db.impuser.find();
{ "_id" : "admin.root", "user" : "root", "db" : "admin", "credentials" : { "MONGODB-CR" : "7bc9aa6753e5241290fd85fece372bd8" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
{ "_id" : "test.anbob", "user" : "anbob", "db" : "test", "credentials" : { "MONGODB-CR" : "870c3c636f8f34ab73c5974df971190f" }, "roles" : [ { "role" : "clusterAdmin", "db" : "admin" }, { "role" : "readAnyDatabase", "db" : "admin" }, { "role" : "readWrite", "db" : "test" } ] }
> ^C
bye
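To make the contrast between the two runs explicit: without --upsert a re-import only produces duplicate-key errors, while with it the file's contents win and the earlier manual edits are rolled back. A dry-run sketch of an idempotent import command (host and file name follow the example above; nothing is executed):

```shell
#!/bin/sh
# Sketch: build an idempotent mongoimport command (dry run). With --upsert,
# re-running the import overwrites documents whose _id already exists
# instead of reporting duplicate key errors.
import_cmd() {
    db="$1"; coll="$2"; file="$3"
    echo "mongoimport -h 192.168.168.231 -d $db -c $coll --file $file --upsert"
}

import_cmd test impuser u.json
```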

3. Backup database with mongodump

mongodump is a useful tool for backing up a Mongo database. Apart from taking a cold backup, it can also take a hot backup: like mongoexport, you can connect to a running MongoDB instance and take a backup even while users are using the database.

[mongo@db231 bin]$ mongodump --help
Export MongoDB data to BSON files.

-h [ --host ] arg        mongo host to connect to ( <set name>/s1,s2 for sets)
--port arg               server port. Can also use --host hostname:port
-u [ --username ] arg    username
-p [ --password ] arg    password
--dbpath arg             directly access mongod database files in the given
                         path, instead of connecting to a mongod server -
                         needs to lock the data directory, so cannot be used
                         if a mongod is currently accessing the same path
--directoryperdb         each db is in a separate directory (relevant only if
                         dbpath specified)
--journal                enable journaling (relevant only if dbpath specified)

-d [ --db ] arg          database to use
-c [ --collection ] arg  collection to use (some commands)
-o [ --out ] arg (=dump) output directory or "-" for stdout
-q [ --query ] arg       json query
--oplog                  Use oplog for point-in-time snapshotting
--repair                 try to recover a crashed database

# Backup by Shutting down Mongod Instance

[mongo@db231 ~]$ mongodump --version
 version 2.6.0

[mongo@db231 ~]$ mkdir backup
[mongo@db231 ~]$ cd backup

[mongo@db231 backup]$ mongod --shutdown
killing process with pid: 17003

[mongo@db231 backup]$ mongodump --dbpath=/data/db
2014-05-04T13:19:39.562+0800 all dbs
2014-05-04T13:19:39.760+0800 [tools] command admin.$cmd command: listDatabases { listDatabases: 1 } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) R:2 W:197303 r:22 reslen:227 197ms
2014-05-04T13:19:39.760+0800 DATABASE: test      to     dump/test
2014-05-04T13:19:39.762+0800    test.system.indexes to dump/test/system.indexes.bson
2014-05-04T13:19:39.763+0800             6 documents
2014-05-04T13:19:39.763+0800    test.fs.files to dump/test/fs.files.bson
2014-05-04T13:19:39.763+0800             0 documents
2014-05-04T13:19:39.763+0800    Metadata for test.fs.files to dump/test/fs.files.metadata.json
2014-05-04T13:19:39.763+0800    test.fs.chunks to dump/test/fs.chunks.bson
2014-05-04T13:19:39.770+0800             0 documents
2014-05-04T13:19:39.770+0800    Metadata for test.fs.chunks to dump/test/fs.chunks.metadata.json
2014-05-04T13:19:39.770+0800    test.testtab to dump/test/testtab.bson
2014-05-04T13:19:39.770+0800             1 documents
2014-05-04T13:19:39.770+0800    Metadata for test.testtab to dump/test/testtab.metadata.json
2014-05-04T13:19:39.770+0800    test.impuser to dump/test/impuser.bson
2014-05-04T13:19:39.770+0800             2 documents
2014-05-04T13:19:39.770+0800    Metadata for test.impuser to dump/test/impuser.metadata.json
2014-05-04T13:19:39.770+0800 DATABASE: admin     to     dump/admin
2014-05-04T13:19:39.771+0800    admin.system.indexes to dump/admin/system.indexes.bson
2014-05-04T13:19:39.771+0800             3 documents
2014-05-04T13:19:39.771+0800    admin.system.version to dump/admin/system.version.bson
2014-05-04T13:19:39.771+0800             1 documents
2014-05-04T13:19:39.771+0800    Metadata for admin.system.version to dump/admin/system.version.metadata.json
2014-05-04T13:19:39.771+0800    admin.system.users to dump/admin/system.users.bson
2014-05-04T13:19:39.771+0800             2 documents
2014-05-04T13:19:39.771+0800    Metadata for admin.system.users to dump/admin/system.users.metadata.json
2014-05-04T13:19:39.772+0800 [tools] dbexit: 
2014-05-04T13:19:39.772+0800 [tools] shutdown: going to close listening sockets...
2014-05-04T13:19:39.772+0800 [tools] shutdown: going to flush diaglog...
2014-05-04T13:19:39.772+0800 [tools] shutdown: going to close sockets...
2014-05-04T13:19:39.772+0800 [tools] shutdown: waiting for fs preallocator...
2014-05-04T13:19:39.772+0800 [tools] shutdown: closing all files...
2014-05-04T13:19:39.772+0800 [tools] closeAllFiles() finished
2014-05-04T13:19:39.772+0800 [tools] shutdown: removing fs lock...
2014-05-04T13:19:39.772+0800 [tools] dbexit: really exiting now

TIP:
In the above examples, mongodump created a directory named dump under the current directory from which the command was executed.

[mongo@db231 backup]$ ll
total 4
drwxrwxr-x 4 mongo mongo 4096 May  4 13:19 dump
[mongo@db231 backup]$ cd dump/
[mongo@db231 dump]$ ls
admin  test
[mongo@db231 dump]$ cd test
[mongo@db231 test]$ ls
fs.chunks.bson           fs.files.bson           impuser.bson           system.indexes.bson  testtab.metadata.json
fs.chunks.metadata.json  fs.files.metadata.json  impuser.metadata.json  testtab.bson
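Before relying on a dump like the one listed above, it helps to enumerate what was actually captured; each .bson file can then be inspected with the bsondump utility. A minimal sketch (the dump path is whatever directory mongodump wrote to):

```shell
#!/bin/sh
# Sketch: list every .bson file under a dump directory, sorted, e.g. as
# input for bsondump or simply to verify all expected collections are there.
list_bson() {
    find "$1" -name '*.bson' | sort
}
```

Usage: `list_bson /home/mongo/backup/dump` or, to decode one file, `bsondump dump/test/impuser.bson`.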

# Backup without Shutting down Mongod Instance

[mongo@db231 test]$ mongod -f /etc/mongodb.conf 
about to fork child process, waiting until server is ready for connections.
forked process: 17386
child process started successfully, parent exiting


[mongo@db231 ~]$ mongodump --host 192.168.168.231 --db test --username anbob --password mongo --out /home/mongo/backup2
connected to: 192.168.168.231
2014-05-04T13:33:06.034+0800 DATABASE: test      to     /home/mongo/backup2/test
2014-05-04T13:33:06.034+0800    test.system.indexes to /home/mongo/backup2/test/system.indexes.bson
2014-05-04T13:33:06.035+0800             6 documents
2014-05-04T13:33:06.035+0800    test.fs.files to /home/mongo/backup2/test/fs.files.bson
2014-05-04T13:33:06.035+0800             0 documents
2014-05-04T13:33:06.035+0800    Metadata for test.fs.files to /home/mongo/backup2/test/fs.files.metadata.json
2014-05-04T13:33:06.036+0800    test.fs.chunks to /home/mongo/backup2/test/fs.chunks.bson
2014-05-04T13:33:06.036+0800             0 documents
2014-05-04T13:33:06.036+0800    Metadata for test.fs.chunks to /home/mongo/backup2/test/fs.chunks.metadata.json
2014-05-04T13:33:06.036+0800    test.testtab to /home/mongo/backup2/test/testtab.bson
2014-05-04T13:33:06.036+0800             1 documents
2014-05-04T13:33:06.037+0800    Metadata for test.testtab to /home/mongo/backup2/test/testtab.metadata.json
2014-05-04T13:33:06.037+0800    test.impuser to /home/mongo/backup2/test/impuser.bson
2014-05-04T13:33:06.037+0800             2 documents
2014-05-04T13:33:06.037+0800    Metadata for test.impuser to /home/mongo/backup2/test/impuser.metadata.json
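Because a hot dump like the one above can be taken at any time, repeated runs will overwrite each other unless each gets its own --out directory. A sketch that stamps the output path with the current time (the base path is an assumption; the mongodump line is only printed, not executed):

```shell
#!/bin/sh
# Sketch: give each hot backup its own timestamped directory so repeated
# dumps never overwrite one another. The base path is a placeholder.
backup_dir() {
    echo "$1/$(date +%Y%m%d_%H%M%S)"
}

out=$(backup_dir /home/mongo/backup2)
echo "mongodump --host 192.168.168.231 --db test --out $out"
```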

Tip:
If the mongo instance has multiple databases (for example, mongodevdb and mongoproddb), execute the mongodump command once per database, as shown below, to back up each of them.

# mongodump --db mongodevdb --username mongodevdb --password YourSecretPwd

# mongodump --db mongoproddb --username mongoproddb --password YourSecretProdPwd
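The two commands above can also be generated with a loop instead of being typed by hand. A dry-run sketch (the commands are printed, not executed; the database names and the one-credential-per-database convention are placeholders from the tip above):

```shell
#!/bin/sh
# Sketch: emit one mongodump command per database (dry run). Database
# names and the username convention are illustrative assumptions.
dump_cmds() {
    for db in "$@"; do
        echo "mongodump --db $db --username $db --out dump/$db"
    done
}

dump_cmds mongodevdb mongoproddb
```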

TIP:
Instead of backing up all the collections in a particular database, you can also back up a specific collection using the --collection option.

4. Restore database with mongorestore

# Restore the database without a running mongod instance (direct --dbpath access)

[mongo@db231 db]$ mongo 192.168.168.231 -u anbob -p mongo
MongoDB shell version: 2.6.0
connecting to: 192.168.168.231/test
> show dbs;
admin  0.078GB
local  0.078GB
test   0.078GB
> use test
switched to db test

> db.dropDatabase();
{ "dropped" : "test", "ok" : 1 }

> show dbs;
admin  0.078GB
local  0.078GB
> exit
bye

[mongo@db231 db]$ mongod --shutdown
killing process with pid: 17482
[mongo@db231 db]$ ps -ef|grep mongod|grep -v grep
[mongo@db231 db]$ cd /backup/

[mongo@db231 backup]$ ls
dump
[mongo@db231 backup]$ mongorestore --dbpath /data/db --db test dump/test
2014-05-04T13:50:56.929+0800 dump/test/impuser.bson
2014-05-04T13:50:56.929+0800    going into namespace [test.impuser]
2014-05-04T13:50:56.932+0800 [FileAllocator] allocating new datafile /data/db/test.ns, filling with zeroes...
2014-05-04T13:50:56.932+0800 [FileAllocator] creating directory /data/db/_tmp
2014-05-04T13:50:57.013+0800 [FileAllocator] done allocating datafile /data/db/test.ns, size: 16MB,  took 0.08 secs
2014-05-04T13:50:57.016+0800 [FileAllocator] allocating new datafile /data/db/test.0, filling with zeroes...
2014-05-04T13:50:57.334+0800 [FileAllocator] done allocating datafile /data/db/test.0, size: 64MB,  took 0.317 secs
2014-05-04T13:50:57.335+0800 [tools] build index on: test.impuser properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "test.impuser" }
2014-05-04T13:50:57.335+0800 [tools]     added index to empty collection
2014-05-04T13:50:57.335+0800 [tools] insert test.impuser ninserted:1 keyUpdates:0 numYields:0 locks(micros) w:403396 403ms
..

[mongo@db231 backup]$ mongod -f /etc/mongodb.conf            
about to fork child process, waiting until server is ready for connections.
forked process: 17524
child process started successfully, parent exiting


[mongo@db231 backup]$ mongo 192.168.168.231 -u anbob -p mongo
MongoDB shell version: 2.6.0
connecting to: 192.168.168.231/test
> show dbs
admin  0.078GB
local  0.078GB
test   0.078GB
> use test
switched to db test
> show collections;
fs.chunks
fs.files
impuser
system.indexes
testtab
> 

Tip:
In the above examples, mongorestore performs a merge if it sees that the database already exists: it prints a warning for every collection it tries to restore that is already present in the destination database.

If you want a clean restore, use the --drop option. If a collection that exists in the backup also exists in the destination database, mongorestore will drop that collection first and then restore the one from the backup.

You can also restore a mongo backup to a mongodb instance running on a different server.

The following example restores the backup located at /home/mongo/backup/dump to the mongodb instance running on the 192.168.168.230 server.

mongorestore --host 192.168.168.230 --port 3017 --db test --username anbob --password anbob --drop /home/mongo/backup/dump
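When a dump contains several databases, the remote restore above can be repeated per database directory. A dry-run sketch that discovers database directories under the dump tree and prints one mongorestore command each (host, port, and dump path echo the example; nothing is executed):

```shell
#!/bin/sh
# Sketch: emit one "mongorestore --drop" command per database directory
# found under a dump tree (dry run; host and dump path are placeholders).
restore_cmds() {
    dumpdir="$1"; host="$2"
    for d in "$dumpdir"/*/; do
        db=$(basename "$d")
        echo "mongorestore --host $host --db $db --drop $dumpdir/$db"
    done
}
```

Usage: `restore_cmds /home/mongo/backup/dump 192.168.168.230:3017`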
