
Always back up

njshorts

New Member
Have fun recovering an encrypted external drive once you get the click of death. A NAS with at least RAID 1 is a much better onsite solution.
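If that NAS happens to be a Linux box using software RAID (mdadm), a small watchdog like this sketch can flag a degraded mirror so a dying disk gets noticed before the second one fails; the path and platform are assumptions, since the post doesn't name a specific NAS.

```python
# Minimal sketch: check Linux software-RAID (mdadm) health by parsing /proc/mdstat.
# Assumes a Linux NAS using md arrays; adapt for hardware RAID or other platforms.
import re
import sys

def degraded_arrays(mdstat_path="/proc/mdstat"):
    """Return names of md arrays whose status line shows a missing member disk."""
    bad = []
    try:
        with open(mdstat_path) as f:
            text = f.read()
    except FileNotFoundError:
        return bad  # no software RAID on this machine
    # Each array block contains a status like "[2/2] [UU]"; an underscore means a missing disk.
    for block in re.split(r"\n(?=md\d+)", text):
        m = re.match(r"(md\d+)", block)
        if m and re.search(r"\[[U_]*_[U_]*\]", block):
            bad.append(m.group(1))
    return bad

if __name__ == "__main__":
    failed = degraded_arrays()
    if failed:
        print("WARNING: degraded arrays:", ", ".join(failed))
        sys.exit(1)
    print("All md arrays healthy (or none present).")
```

Run from cron, it exits non-zero whenever an array drops a member, which is an easy hook for an email alert.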

Online backup services aren't shared hosting, which is what I assume you're referring to with the "same node as idiot" line of argument. Backup companies don't have to allow external execution of commands the way a hosting company does; the access levels are completely different. You should have both onsite and offsite backups for obvious reasons (fire, tornado, tsunami...), and as long as you do due diligence with your vendor, as you should with anything, you'll be fine.

Note that most security breaches that worry customers involve either database hacks, which aren't feasible against a good backup host (two-key passwords encrypted by additional keys, etc., not just a SQL database of all customer passwords), or the scenario above (exploiting older software on a shared host to gain root and access every user account on that box).
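As a rough illustration of the "keys encrypted by additional keys" idea (often called envelope encryption or key wrapping), here's a minimal sketch using the third-party cryptography package; the names and flow are illustrative assumptions, not any particular vendor's actual scheme.

```python
# Minimal sketch of envelope encryption / key wrapping: the data key that encrypts a
# backup is itself stored encrypted under a master key, so a dump of the storage
# database alone is useless. Illustrative only; not any vendor's real implementation.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()   # kept offline / in an HSM in a real setup
data_key = Fernet.generate_key()     # per-customer or per-backup key

# Encrypt the backup payload with the data key...
payload = b"contents of tonight's backup archive"
ciphertext = Fernet(data_key).encrypt(payload)

# ...then wrap the data key with the master key before storing it alongside the backup.
wrapped_data_key = Fernet(master_key).encrypt(data_key)

# Recovery requires both the wrapped key record and the master key.
recovered_key = Fernet(master_key).decrypt(wrapped_data_key)
assert Fernet(recovered_key).decrypt(ciphertext) == payload
```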

In any case, we're getting a bit too specific; a good backup scenario includes onsite and offsite solutions with multiple redundancies.

I agree with the first part: a rackable 2U server with four 1 TB drives in RAID 10 is the solution we chose. It's overkill for most, if not all, shops, but it's relatively cheap. Like I said earlier, rsync from local storage to a remote host so you have both local and remote copies; that covers nearly every scenario that leaves more than just cockroaches behind. I will never agree about cloud storage. I've worked in datacenters and for hosting companies and have seen the other side, and I'll never trust it. Difference of opinion, I guess.
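For the rsync step described above, a minimal sketch might look like the following; the paths, host name, and SSH user are placeholders to point at your own NAS and offsite machine.

```python
# Minimal sketch of the local-to-remote rsync step described above.
# Paths, host name, and SSH user are placeholders; adjust for your own setup.
import subprocess
import sys

LOCAL_BACKUP_DIR = "/mnt/nas/backups/"                        # onsite copy (e.g. the RAID box)
REMOTE_TARGET = "backup@offsite.example.com:/srv/backups/"    # offsite mirror over SSH

def mirror_offsite():
    """Push the local backup tree to the offsite host, deleting files removed locally."""
    cmd = [
        "rsync",
        "-a",            # archive mode: preserve permissions, times, symlinks
        "--delete",      # keep the mirror exact; drop this to retain remote history
        "-e", "ssh",     # transfer over SSH
        LOCAL_BACKUP_DIR,
        REMOTE_TARGET,
    ]
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"rsync failed with exit code {result.returncode}")

if __name__ == "__main__":
    mirror_offsite()
```

Drop --delete if you'd rather keep deleted files on the offsite copy, and schedule it nightly from cron.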
 

Edserv

New Member
After SWEARING I would never lose data again, we just had a major storm and our server would not reboot. We have clones of all our desktops and a double external backup on our server, but our computer guy was "out of pocket" and I didn't have a policy/procedure manual in place. So I spent the better part of Thursday night and Friday morning trying to get the server back online. Just when I was disconnecting everything to take it to a local computer store, the server came back to life. I'm pretty sure we kept everything, but as a rule of thumb, even if you back up your data and have a great MIS company, make sure you have an auditing system in place and policies on what to do if your system goes down. If we had lost even a day's worth of data it would have sucked HARD.
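A very small auditing check along those lines, as a sketch with assumed paths and thresholds: verify every morning that last night's backup actually landed and is recent.

```python
# Minimal sketch of a nightly "is the backup actually fresh?" audit.
# The path and the 24-hour threshold are assumptions; adjust to your own backup
# location and schedule.
import os
import sys
import time

BACKUP_DIR = "/mnt/backup/nightly"   # placeholder: wherever the server backup lands
MAX_AGE_HOURS = 24

def newest_mtime(root):
    """Return the most recent modification time of any file under root, or None if empty."""
    latest = None
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            mtime = os.path.getmtime(os.path.join(dirpath, name))
            latest = mtime if latest is None or mtime > latest else latest
    return latest

if __name__ == "__main__":
    latest = newest_mtime(BACKUP_DIR)
    if latest is None or (time.time() - latest) > MAX_AGE_HOURS * 3600:
        print(f"ALERT: no backup newer than {MAX_AGE_HOURS}h in {BACKUP_DIR}")
        sys.exit(1)
    print("Backup is current.")
```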
As many times as I've been through this, and as many times as I thought I'd made plans, "BAM" and you're a day, a week, a month, or more in the dark, trying to repair data.
Edserv
 

James Burke

Being a grandpa is more fun than working
As many times as I've been through this, and as many times as I thought I'd made plans, "BAM" and you're a day, a week, a month, or more in the dark, trying to repair data.
Edserv

Sounds like it would do a person good to make a few "dry runs" to test their recovery system. I like the idea of developing and maintaining a protocol to follow, such as a list of files, their locations, and how often each gets backed up.
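A minimal sketch of that kind of protocol, with made-up entries for illustration: a manifest of what gets backed up, where it lives, and how often, printed as a checklist to walk through during a dry run.

```python
# Minimal sketch of a backup protocol manifest. Entries are made-up examples
# for a sign shop; list your own files, locations, and schedules.
BACKUP_MANIFEST = [
    # (description,          source,                destination,              frequency)
    ("Customer artwork",     "/srv/jobs/artwork",   "NAS + offsite rsync",    "nightly"),
    ("Accounting database",  "/srv/accounting",     "NAS + offsite rsync",    "nightly"),
    ("Email archives",       "/srv/mail",           "NAS",                    "weekly"),
    ("Workstation clones",   "desktop images",      "external drive",         "monthly"),
]

def print_protocol(manifest=BACKUP_MANIFEST):
    """Print the manifest as a checklist to walk through during a dry run."""
    for desc, src, dest, freq in manifest:
        print(f"[ ] {desc:22} {src:22} -> {dest:24} ({freq})")

if __name__ == "__main__":
    print_protocol()
```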

Just for giggles, take some time and transfer all your stuff to another computer and make notes of any issues encountered.
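One way to check such a transfer, as a sketch with example paths: hash every file on the original machine and on the copy, then flag anything missing or changed.

```python
# Minimal sketch of verifying a dry-run restore: hash every file on the original
# machine and on the restored copy and report anything missing or different.
# Both paths below are placeholders for wherever the original data and the test copy live.
import hashlib
import os

def hash_tree(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            digests[os.path.relpath(path, root)] = h.hexdigest()
    return digests

def compare(original_root, restored_root):
    """Print every file that is missing from, or differs in, the restored copy."""
    original, restored = hash_tree(original_root), hash_tree(restored_root)
    for rel, digest in original.items():
        if rel not in restored:
            print("MISSING in restore:", rel)
        elif restored[rel] != digest:
            print("DIFFERS after restore:", rel)

if __name__ == "__main__":
    compare("/srv/shop-data", "/mnt/test-restore/shop-data")  # example paths
```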

When changing over to a new computer, I found quite a few holes in my recovery plans, but fortunately I had the old hard drive to fall back on. It's good to have these issues resolved before a real disaster strikes.

Thanks for such a timely subject.

JB
 