Slide count: 51
Hosted by
Making Reliable and Restorable Backups
Presented by: W. Curtis Preston, President, The Storage Group, Inc.
Making good on your investment
- Many SANs are built in order to simplify backup, yet often fail for lack of good design, processes, and procedures
- There are several common mistakes that people make when building a backup system
- Avoiding these mistakes and taking proper action can create a backup system that is reliable and restorable
What will we cover?
- Common backup configuration mistakes
- How to avoid them:
  - Sizing your backup system
  - Configuration examples for NetBackup
  - Configuration examples for NetWorker
Common Backup Configuration Mistakes
Where do these lessons come from?
- Audits of real backup and recovery systems
- Lessons learned from real horror stories
- Many, many sleepless nights
Too little power
- Not enough tape drives
- Tape drives that aren't fast enough
- Not enough slots in the tape library
- Not enough bandwidth to the server
Too much power
- Streaming tape drives must be kept streaming
- If you don't, you will wear out your tape drives and decrease aggregate performance
- You must match the speed of the pipe to the speed of the tape
- You can actually increase your throughput by using fewer tape drives
Not using multiplexing
- Defined: sending multiple backup jobs to the same drive simultaneously
- Again, drives must be streamed
- Multiplexing will impact restore performance, but not as much as you might think
- Multiplexing can actually help your restores just as it helps your backups
- Using multiplexing can greatly increase the utilization of your backup hardware
Not using multistreaming
- Defined: sending multiple simultaneous backup jobs from a single client
- Large systems cannot be backed up serially
- Multistreaming creates a separate job for each filesystem
Using include lists
- Most major backup software supports filesystem discovery
- Still, many administrators use manually created include lists
- Any perceived value is significantly outweighed by the risk it creates
Too many full backups
- If you are using a commercial backup and recovery product with automated media management and multiple backup levels, weekly full backups are a waste of tape, time, and money
- Monthly full backups, weekly cumulative incrementals (level 1), and daily incrementals (level 9) work just as well and use about ¼ as much tape
- Depending on the level of incremental activity, quarterly full backups can work just as well
Not standardizing
- Creating custom configurations for each client is easier, but much riskier
- Creating a standard backup client configuration can significantly decrease risk
- Create a standard exclude list, etc., and push it out to each client
Not even noticing!
- Backups often go ignored; they're like the bill collector nobody wants to talk to
- Backup reporting products can really help automate reporting
- Don't ignore backups. They will bite you.
It's just backups, right?
- "I'm an experienced, seasoned systems administrator. This is just backups. How hard can they be?"
- The data being backed up has become very complex, and backup systems have matched that complexity with functionality that also happens to be complex
Not thinking about disk
- Tape is not as cheap as you thought
- Consider a 4 TB library:
  - 20 slots, 2 drives: $17K
  - 200 tapes at $70 apiece: $14K
  - Robotic license: $10K
  - Total: $41K (does not include labor costs)
- That's about $10/GB
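The slide's per-gigabyte figure is easy to sanity-check. The sketch below redoes the arithmetic; the prices are the slide's example list prices, not current quotes, and the $14K tape line works out to 200 tapes at $70 each.

```python
# Back-of-the-envelope check of the slide's tape-library cost figure.
# Prices are the example list prices from the slide, not real quotes.
library_and_drives = 17_000   # 20 slots, 2 drives
tapes = 200 * 70              # 200 tapes at $70 apiece = $14K
robotic_license = 10_000

total = library_and_drives + tapes + robotic_license
cost_per_gb = total / 4_000   # 4 TB = 4,000 GB (decimal)

print(f"${total:,} total, ${cost_per_gb:.2f}/GB")  # roughly the $10/GB quoted
```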
Disk is cheaper than you thought
- ATA-based storage arrays as low as $5/GB (disk only, needs a filesystem)
- Special-function arrays:
  - The Quantum DX30 looks and behaves like a Quantum P1000, and can be used as a target for "tape-based" backups (3 usable TB, $55K list, or about $18/GB)
  - The NetApp R100 looks like any other NetApp filer: a target for SnapVault and disk-based backups, and a source for SnapMirror (9+ usable TB, $175K list, or about $18/GB)
- ATA disks are not suited for heavy random access, but are perfect for large block I/O (e.g., backups!)
You can do neat things with disk
- Incremental backups are one of the greatest backup performance challenges
- Use disk as a target for all incremental backups (fulls too, if you can afford it)
- For off-site storage, duplicate all disk-based backups to tape
- Leave the disk-based backups on disk
Now that I know…
Building a reliable and restorable backup system
Sizing the backup system
Server size/power
- I/O performance is more important than CPU power
- CPU, memory, and I/O expandability are paramount
- Avoid overbuying by testing prospective servers under load
- If you use Suns, you've got snoop and truss
Catalog/database size
- Determine the number of files (n)
- Determine the number of days in a cycle (d). (A cycle is a full backup and its associated incremental backups.)
- Determine the daily incremental size (i = n × 0.02)
- Determine the number of cycles kept on-line (c)
- Figure 150-250 bytes per file, per backup
- Use a 1.5 multiplier for growth and error
- Index size = (n + (i × d)) × c × 250 × 1.5
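The sizing formula above is straightforward to turn into a calculator. A minimal sketch, using the slide's conservative 250 bytes/file and 2% daily change rate as defaults:

```python
def index_size_bytes(n_files, days_per_cycle, cycles_online,
                     daily_change=0.02, bytes_per_file=250, fudge=1.5):
    """Catalog/index sizing from the slide:
    (n + i*d) * c * 250 * 1.5, where i = n * 0.02."""
    i = n_files * daily_change  # files touched by each daily incremental
    return (n_files + i * days_per_cycle) * cycles_online * bytes_per_file * fudge

# Example: 5M files, 30-day cycles, 3 cycles kept on-line
size = index_size_bytes(5_000_000, 30, 3)
print(f"{size / 1e9:.1f} GB")  # 9.0 GB
```

Even a modest file server can need several gigabytes of catalog space, which is why the slide pads the estimate by 1.5x.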
Library size: drives
- Network backup:
  - Buy twice as many backup drives as your network will support
  - Use only as many drives as the network will support (you will get more with less)
  - Use the other half of the drives for duplicating
Library size: drives
- Local backup:
  - Most large servers have enough I/O bandwidth to back themselves up within a reasonable time
  - It's usually a simple matter of mathematics:
    - 8-hour window, 8 TB = 1 TB/hr ≈ 277 MB/s
    - That's 30 drives at 10 MB/s, or 15 drives at 20 MB/s
  - You must have sufficient bandwidth to the tape drives
  - Filesystem vs. raw recoveries
  - Allow drives and time for duplicating!
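The drive-count math above generalizes easily. A small sketch; note it ignores real-world overhead, which is why a deployment pads the raw ceiling (as the slide's 30/15 figures do):

```python
import math

def drives_needed(data_tb, window_hours, drive_mb_per_s):
    """Tape drives needed to move data_tb terabytes within window_hours,
    given a per-drive streaming rate in MB/s. No overhead is modeled,
    so real deployments should pad the result."""
    required_mb_per_s = data_tb * 1_000_000 / (window_hours * 3600)
    return required_mb_per_s, math.ceil(required_mb_per_s / drive_mb_per_s)

rate, n = drives_needed(8, 8, 10)
print(f"{rate:.0f} MB/s -> {n} drives at 10 MB/s")  # 278 MB/s -> 28 drives
```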
Library size: slots (all-tape environment)
- The library should hold all on-site tapes
- On-site tapes automatically expire and get reused
- Only off-site tapes require physical management
- Monitor the library via a script to ensure that each pool has enough free tapes before you go home
- Watch for those downed-drive messages
Library size: slots (disk/tape environment)
- Do incremental backups to disk
- The library only needs to hold the on-site full tapes and the latest set of copies
- On-site tapes and disk-based backups automatically expire and get reused
- Only off-site tapes require physical management
- Monitor the library and disk via a script to ensure that each pool has enough free tapes before you go home
- Watch for those downed-drive messages
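The "check free tapes before you go home" script can be very small. A hypothetical sketch: the pool names, thresholds, and the source of the free-tape counts (typically parsed from your backup product's media query tool) are all assumptions, not part of any specific product:

```python
# Hypothetical end-of-day check: warn if any media pool is low on free tapes.
MIN_FREE = {"Daily": 5, "Weekly": 3, "Offsite": 2}  # example thresholds

def check_pools(free_counts, thresholds=MIN_FREE):
    """free_counts: {pool_name: free_tape_count}, e.g. parsed from the
    backup product's media query output. Returns a list of warnings."""
    warnings = []
    for pool, minimum in thresholds.items():
        have = free_counts.get(pool, 0)
        if have < minimum:
            warnings.append(f"Pool {pool}: {have} free tapes (need {minimum})")
    return warnings

for w in check_pools({"Daily": 2, "Weekly": 4, "Offsite": 2}):
    print(w)  # mail or page this instead in production
```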
Local or remote backup?
- Throughput (in 8 hours), if you "own the wire":
  - 10 Mb = 20 GB; 100 Mb = 200 GB
  - GbE = 500 GB to 1 TB (you must also "own the box")
- Anything greater than 500 GB should be backed up "locally":
  - LAN-free backups allow you to share a large tape library by performing "local" backups to a "remote, shared" device
  - More than one 500+ GB server? Buy a SAN!
  - Only one 500+ GB server? Plan for a SAN!
  - (NetBackup = SSO, NetWorker = DDS)
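The throughput rules of thumb above can be reproduced with a simple model. The 55% efficiency factor below is an assumption chosen to match the slide's numbers; real backup throughput over a shared network varies widely:

```python
def window_capacity_gb(link_mbps, window_hours=8, efficiency=0.55):
    """Rough backup capacity of a network link over a backup window.
    efficiency=0.55 is an assumed real-world factor that roughly
    reproduces the slide's rules of thumb (10 Mb ~ 20 GB, 100 Mb ~ 200 GB);
    GbE is usually limited by the host, not the wire."""
    mb_per_s = link_mbps / 8 * efficiency
    return mb_per_s * window_hours * 3600 / 1000  # GB (decimal)

print(round(window_capacity_gb(10)))    # ~20 GB over 10 Mb
print(round(window_capacity_gb(100)))   # ~198 GB over 100 Mb
```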
Multistreaming: NetBackup
Defined: starting multiple simultaneous backup jobs from a single client
- Set maximum jobs per client > 1
- Check "Allow multiple data streams"
- Use ALL_LOCAL_DRIVES, or multiple entries in the file list
- Set maximum jobs per policy > 1, or leave it unchecked
- You need a storage unit with more than one drive, or one drive with multiplexing enabled
- You can change max jobs per client using the Server Properties -> Clients tab (4.5)
- By default, NetBackup will not exceed one job per filesystem, but you can bypass this if you make your own file list
Multistreaming (parallelism): NetWorker
- Use the "All" saveset, or multiple entries in the saveset list
- Set the parallelism setting for the server and, if necessary, the storage node
- Set the client parallelism value in the client attributes
- You must have multiple drives available, or one drive with target sessions set higher than one
- Will not exceed the number of disks or logical volumes on the client (see maximum sessions in the manual)
Multiplexing: NetWorker
- Set target sessions per device, allocating how many sessions may be sent to that device
- This is a global setting for all backups that go to that device
Multiplexing: NetBackup
- Set max multiplexing per drive in the storage unit configuration > 1
- Set media multiplexing in the schedule > 1
  - Use higher multiplexing for incremental backups if going to tape (6-8)
  - Use lower multiplexing for local backups (2)
  - No need to multiplex disk storage units
- Multiple policies can multiplex to the same drive, but multiple media servers cannot
Using include lists -- not!
- NetBackup: ALL_LOCAL_DRIVES in the file list
- NetWorker: All in the saveset field
- This automatically excludes NFS/CIFS drives
- It does not include dynamically mounted drives that are not in /etc/*fstab
What about database clients?
- Use scripts that parse lists of databases:
  - /var/opt/oracle/oratab for Oracle
  - The MS-SQL database list in the registry
  - The master database in Sybase
- Some backup products support "All" for databases
- Remember to write a standardized script with parameters to back up the databases
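Parsing a database list like oratab is a few lines of code. A sketch for the Oracle case; oratab lines have the well-known `SID:ORACLE_HOME:startup_flag` format, and `back_up()` at the end is a placeholder for your own wrapper, not a real API:

```python
def oracle_sids(oratab_lines):
    """Extract Oracle SIDs from /var/opt/oracle/oratab-style lines
    (format: SID:ORACLE_HOME:startup_flag). Skips comments and blanks."""
    sids = []
    for line in oratab_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split(":")
        if len(fields) >= 2 and fields[0]:
            sids.append(fields[0])
    return sids

# Typical use (back_up() is a placeholder for your backup wrapper):
# with open("/var/opt/oracle/oratab") as f:
#     for sid in oracle_sids(f):
#         back_up(sid)
```

Driving the backup from the parsed list means a newly created database is picked up automatically, which is the whole point of avoiding hand-maintained include lists.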
Incremental backups: NetBackup
- Create staggered monthly full backups using calendar-based scheduling (CBS)
- Create staggered weekly cumulative incrementals using CBS
- Create daily incremental backups using frequency-based backups (check "Allow after run day")
- Delete the window from the previous day for CBS
Incremental backups: NetWorker
- Do not use the Default schedule!
- Create 28 schedules with a monthly full, weekly level 1s, and daily incrementals, and name them after the day of the full
- Do not specify a schedule for the Group
- Assign the 28 schedules evenly across all clients, based on size
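"Assign evenly based on size" is a balancing problem. A hypothetical sketch using a simple greedy heuristic (largest client to the lightest schedule); the client names and sizes are made up, and nothing here is a NetWorker API:

```python
def assign_schedules(clients_by_size, n_schedules=28):
    """Greedy balancing: give the biggest remaining client to the
    schedule with the least total data, so each night's fulls are
    roughly the same size. clients_by_size: {client: size_gb}."""
    loads = [0] * n_schedules
    assignment = {}
    for client, size in sorted(clients_by_size.items(), key=lambda kv: -kv[1]):
        idx = loads.index(min(loads))  # lightest schedule so far
        assignment[client] = idx       # schedule index 0..n_schedules-1
        loads[idx] += size
    return assignment

# Example with made-up clients and 2 schedules for readability:
print(assign_schedules({"db1": 400, "file1": 380, "web1": 120, "web2": 100}, 2))
```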
Standardization: NetWorker
- Use the All saveset entry
- To exclude files, use standard directives for all clients
Standardization: NetBackup
- Use ALL_LOCAL_DRIVES
- Non-Windows clients: use a standard exclude list and push it out from the master using bpgp
- Windows clients: use a standard exclude list and push it out from the master using bpgetconfig -M and bpsetconfig -h
Backup reporting: NetBackup
- Watch the activity and device monitors
- bperror
- bpdbjobs -report
- bpdbjobs -report -all_columns
- /usr/openv/netbackup/logs
- /usr/openv/volmgr/logs
Backup reporting: NetWorker
- Watch the nwadmin screens
- mminfo
- nsrinfo
- mmlocate
- nsrmm
- /nsr/logs
Disk-to-disk backup: NetWorker
- If using regular disk, use a file-type device
- Disk backup is an extra-cost option
- If using a virtual tape library, treat it like a tape library
- Use cloning to duplicate disk-based backups to tape and send them off-site
Disk-to-disk backup: NetBackup
- If using regular disk, use a disk-based storage unit (no extra cost for disk storage units!)
- If using a virtual tape library, treat it like a tape library
- Use Vault to duplicate disk-based backups to tape and send them off-site
What about my SAN and NAS?
SAN: LAN-free, client-free, and server-free backup
NAS: NDMP filer to self, filer to filer, filer to server, and server to filer
LAN-free backups
- How does this work?
  - SCSI reserve/release
  - A third-party queuing system
- Levels of drive sharing
- Restores
How client-free backups work
1. Back up the transaction logs to disk
2. Establish the backup mirror
3. Split the backup mirror and back it up
How client-free recoveries work
1. Restore the backup mirror from tape
2. Restore the primary mirror from the backup mirror
3. Replay the transaction logs from disk
Server-free backups
- The server directs the client to take a copy-on-write snapshot
- The client and server record block and file associations
- The server sends an XCOPY request to the SAN
Server-less restores
- Changing block locations
- Image-level restores
- File-level restores
NDMP configurations
- Filer to self
- Filer to filer
- Filer to server
- Server to filer
Using NDMP
- The level of functionality depends on the DMA you choose:
  - Robotic support
  - Filer-to-library support
  - Filer-to-server support
  - Direct access restore (DAR) support
Resources
Resources
- Directories of products to help you build a better backup system: http://www.storagemountain.com
- Send questions to: curtis@thestoragegroup.com