Discussion:
10TB Hard Drive, can't even be accessed by modern OSes yet!
Yousuf Khan
2015-03-14 04:36:55 UTC
Sadly This 10TB Hard Drive Is Designed For Servers, Not Your Laptop
http://gizmodo.com/sadly-this-10tb-hard-drive-is-designed-for-servers-not-1691245306

Hitachi Global Storage Technologies—aka HGST, aka a subsidiary of
Western Digital—was recently showing off its gigantic new 10TB hard
drive at the Linux Foundation Vault tradeshow in Boston. But
unfortunately you won't be packing 10,000 gigabytes into your laptop
anytime soon because the drive is designed for use in servers, and
mostly because it requires special software to work.

Originally revealed back in September of last year, the 10TB SMR
HelioSeal HDD will finally ship sometime in the second quarter of this
year. But thanks to the radical new storage technologies it employs,
the drive will require special updates to an OS like Linux before a
server can actually read and write to it.

The HelioSeal technology simply means the drive is pumped full of
helium to reduce friction between the read/write heads and the
platters, which lets HGST squeeze more platters inside since there's
less heat to deal with. It's the SMR technology that poses the
software problems.

SMR stands for Shingled Magnetic Recording and it basically describes
how data is written to the platters. In a traditional hard drive the
data is written in thin lines with a tiny gap in-between each one to
help minimize corruption. It's similar to how grooves of music are laid
out on a vinyl record. With SMR those data tracks slightly overlap
instead, like waterproof shingles on the roof of a home. There are no
longer any gaps in-between each track which allows more data to be
stored on a single platter, but at the cost of more complicated software
on the OS to properly read, write, and over-write data without
destroying neighboring tracks.
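As a rough illustration of why that matters (a toy model only, not
HGST's firmware or any real drive interface): because each track
partially overlaps the next, overwriting one track in a shingled band
forces every track written after it in that band to be read and
rewritten too.

# Toy model of a shingled band: writing track i clobbers track i+1, so a
# random overwrite turns into a read-modify-write of the rest of the band.
# Purely illustrative; names and sizes are made up.

def rewrite_track(band, index, new_data):
    """Overwrite one track and repair the downstream tracks it damages."""
    saved = band[index + 1:]             # read: everything shingled over later
    band[index] = new_data               # modify: the track we actually wanted
    for offset, data in enumerate(saved):
        band[index + 1 + offset] = data  # write: restore the damaged neighbours
    return 1 + len(saved)                # tracks physically rewritten

band = ["track-%d" % i for i in range(8)]
print(rewrite_track(band, 2, "new data"))  # 6: one logical write, six physical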

It sounds complicated, and it is, which is why HGST's new 10TB drive has
been slow to come to market. Everyone involved wants to make sure the
technology and supporting software works perfectly to avoid disastrous
data loss. But there's no reason to think the technology won't be ready
for desktop PCs and eventually laptops in a few years. Who needs that
cloud anyways?
Paul
2015-03-14 09:11:54 UTC
Post by Yousuf Khan
[snip quoted article]
http://www.storagereview.com/seagate_archive_hdd_review_8tb

"SMR drives are not designed to cope with sustained write behavior"

"We found large sustained backup tasks to take longer than a traditional
PMR HDD, averaging about 30MB/s"

"The SMR drives took much longer for a traditional full backup,
averaging 30MB/s.

However we saw sustained read speeds during a 400GB VM recovery
in excess of 180MB/s, which is really the core metric.
"

It's possible the slightly smaller drives are not affected
like that. Some of the 6TB ones are OK. I saw a review comparing
a few products in the 6TB range, and they had decent sustained
numbers (for home users who care about such things).

This isn't that review, but it'll have to do. It's for a
Seagate 6TB drive, with numbers over 200MB/sec for large
enough block size operations.

http://www.overclockersclub.com/reviews/seagate_enterprise_capacity_6tb_35_hdd_v4_review/4.htm

If you have external disk enclosures, note that some of the new disks
have a different screw-hole pattern on the bottom, so in enclosures
that hold the drive in place with screws from the bottom, only two of
the screws will mate.

I think home users will be staying "one step behind the curve" to get
the best possible secondary storage: an SSD for C:, with conventional
(non-SMR) drives for secondary storage.

The flying height, the last time I checked, was 3nm. HGST is
experimenting with zero flying height. If you thought your
old drives seemed to have a "wear phenomenon", we're just
getting started. The experimental zero-flying-height setup HGST used
lasted one month before the head was ruined. But they'll figure it
out eventually.

I'm crossing my fingers and hoping my current set of drives lasts a
long time. I'm very happy not to have 30MB/sec writes.

Paul
Drew
2015-03-14 12:39:04 UTC
Post by Paul
[snip quoted article and reply]
Why would a home user need 10,000 gigabytes of storage? By the time you
fill it something new would come along or it would die first.
pjp
2015-03-14 12:59:46 UTC
Post by Paul
[snip quoted article and reply]
I have almost 10TB (9+) attached to my main PC, and almost as much
again if I want to mount network shares. It's a mishmash of internal
and external 1TB, 2TB and 3TB drives. All are used, but I try to keep
them all more than 50% free, so that if one goes bonkers I have
somewhere to save what I can before sending it back or replacing it.
Wolf K
2015-03-14 14:46:09 UTC
On 2015-03-14 8:59 AM, pjp wrote:
[snip stuff about Hitachi 10TB drives]
Post by pjp
I have almost 10TB (9+) attached to my main PC, and almost as much
again if I want to mount network shares. It's a mishmash of internal
and external 1TB, 2TB and 3TB drives. All are used, but I try to keep
them all more than 50% free, so that if one goes bonkers I have
somewhere to save what I can before sending it back or replacing it.
Altogether we have 7.5TB of external storage plus 2.5TB of internal
storage on our machines. That's not counting USB drives or SD cards,
which add up to about another 1/2 TB (we don't reuse the SD cards in
our cameras when they're full; we store them as primary backup).

Have a good day,
--
Best,
Wolf K
kirkwood40.blogspot.ca
Rod Speed
2015-03-14 19:50:34 UTC
Post by pjp
[snip quoted article and reply]
I have almost 10TB (9+) attached to my main PC
I actually have more than 10TB myself.
Post by pjp
and almost as much again if I want to mount network shares.
It's a mishmash of internal and external 1TB, 2TB and 3TB drives.
Me too.
Post by pjp
All are used
Me too.
Post by pjp
but I try to keep them all more than 50% free
I don't do anything like that.
Post by pjp
so that if one goes bonkers I have somewhere to save
what I can before sending it back or replacing it.
I'd just buy another in that situation, they are so cheap.

I don't even bother to carefully edit the PVR files that have
been partly watched; it's too much farting around now that
drives are so cheap.
(PeteCresswell)
2015-03-14 16:37:51 UTC
Post by Drew
Why would a home user need 10,000 gigabytes of storage? By the time you
fill it something new would come along or it would die first.
Movies and recorded TV.

My movies are on a 10-TB NAS box (six 3-TB drives w/redundancy so that 2
of them can fail without losing data) and my recorded TV is on 3 local
2-TB drives attached to my 24-7 PC.

The reasons for the local drives are that I don't want to spend more
money on a second NAS box, and I consider recorded TV to be expendable
so a failure would not be a big deal.
--
Pete Cresswell
Rod Speed
2015-03-14 19:46:17 UTC
Post by Drew
[snip quoted article and reply]
Why would a home user need 10,000 gigabytes of storage?
I've already got that, just not in a single drive, mostly for
the PVR overflow that I haven't got around to watching yet.
Post by Drew
By the time you fill it something new would come along or it would die
first.
That isn't what happened with the 10TB+ I already have.
Gene E. Bloch
2015-03-14 23:11:54 UTC
On Sat, 14 Mar 2015 05:39:04 -0700, Drew wrote:

This was going to be my reply:

<MY PLAN>
Post by Drew
Why would a home user need 10,000 gigabytes of storage? By the time you
fill it something new would come along or it
or the user
Post by Drew
would die first.
</MY PLAN>

but before doing it I read the other replies.

Obviously I didn't realize what some users' needs are.
--
Gene E. Bloch (Stumbling Bloch)
Charlie
2015-03-15 08:38:39 UTC
Post by Gene E. Bloch
<MY PLAN>
Post by Drew
Why would a home user need 10,000 gigabytes of storage? By the time you
fill it something new would come along or it
or the user
Post by Drew
would die first.
</MY PLAN>
but before doing it I read the other replies.
Obviously I didn't realize what some users' needs are.
Just think of what a backup would take!
k***@zzz.com
2015-03-15 13:37:23 UTC
Post by Charlie
[snip]
Just think of what a backup would take!
Another 10T disk? Unless I messed up the arithmetic, that's about a
day or two to do a complete backup.
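For what it's worth, a quick sanity check on that arithmetic, using a
few assumed sustained rates (30MB/s and 180MB/s are the figures quoted
from the StorageReview piece earlier in the thread; 120MB/s is just a
typical conventional-HDD guess):

# Time to copy 10 TB (decimal, as the drive makers count it) end to end.
TEN_TB = 10e12          # bytes

for label, mb_per_s in (("SMR sustained write", 30),
                        ("typical HDD", 120),
                        ("fast sequential read", 180)):
    seconds = TEN_TB / (mb_per_s * 1e6)
    print("%-22s %3d MB/s -> %5.1f hours (%.1f days)"
          % (label, mb_per_s, seconds / 3600, seconds / 86400))

# SMR sustained write     30 MB/s ->  92.6 hours (3.9 days)
# typical HDD            120 MB/s ->  23.1 hours (1.0 days)
# fast sequential read   180 MB/s ->  15.4 hours (0.6 days)

So "a day or two" holds at ordinary sequential speeds; at the 30MB/s
SMR worst case it's closer to four days.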
Rod Speed
2015-03-15 19:59:18 UTC
Post by k***@zzz.com
Post by Charlie
[snip]
Just think of what a backup would take!
Another 10T disk? Unless I messed up the arithmetic, that's about a
day or two to do a complete backup.
But you wouldn't do a backup like that; you'd just write new
PVR files to both drives as each new one shows up.
Char Jackson
2015-03-15 14:39:53 UTC
Post by Charlie
[snip]
Just think of what a backup would take!
I have a total of about 50TB here, consisting of two volumes, (using Drive
Bender), and made the decision long ago that I wouldn't be backing up all of
it under any circumstances. Instead, you decide what needs to be backed up
and do it selectively, and the rest either gets protected with parity so
that you can lose a drive or two and still recover, or you simply decide to
ride bareback and treat the data as expendable or replaceable.
--
Char Jackson
Paul
2015-03-15 15:58:32 UTC
Post by Char Jackson
[snip]
I have a total of about 50TB here, consisting of two volumes, (using Drive
Bender), and made the decision long ago that I wouldn't be backing up all of
it under any circumstances. Instead, you decide what needs to be backed up
and do it selectively, and the rest either gets protected with parity so
that you can lose a drive or two and still recover, or you simply decide to
ride bareback and treat the data as expendable or replaceable.
How unhappy would you be, if you lost the entire array ?

Common mode failures do happen.

All it takes is a power supply failure, the 12V rail rising
to 15V for around 30 seconds, and it's all over for your array.

*******

One problem I see with that 10TB drive is that it's not going
to fit into the typical IT guy's "backup window".
You'd be surprised how important that is to some people.

I'm also surprised there's no "reach" program at Seagate
or WD to change the basics of hard drive design and
crank up the bandwidth. If you're going to make a 10TB
drive, it should have 500MB/sec of bandwidth. They should
at least have enough heads to write the entire shingle
in one pass.
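To put a rough number on that (the window lengths below are just
illustrative assumptions): a full sequential image of 10TB needs
roughly 350MB/s to fit an overnight window, which is why ~200MB/s
doesn't cut it and 500MB/sec would.

# Sustained rate needed to image a 10 TB drive inside a backup window.
CAPACITY = 10e12                          # bytes
for window_hours in (8, 12):
    rate = CAPACITY / (window_hours * 3600) / 1e6
    print("%2d-hour window needs %.0f MB/s sustained" % (window_hours, rate))

# 8-hour window needs 347 MB/s sustained
# 12-hour window needs 231 MB/s sustained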

Paul
Char Jackson
2015-03-15 18:15:54 UTC
Post by Paul
[snip]
How unhappy would you be, if you lost the entire array ?
Common mode failures do happen.
I'd be unhappy, but not devastated. The financial loss would be the biggest
thing if all of the drives got toasted; i.e. drive replacement cost. The
important data is backed up, and the rest can be replaced. Having said that,
common mode failures are a pretty rare event these days, so I accept the
risk (as I see it).
--
Char Jackson
k***@zzz.com
2015-03-16 00:12:34 UTC
Post by Paul
[snip]
How unhappy would you be, if you lost the entire array ?
Common mode failures do happen.
All it takes is a power supply failure, the 12V rail rising
to 15V for around 30 seconds, and it's all over for your array.
Well, your house could burn down too (and that's much more likely).
Just unplug an array element and take it to another site. Rotate that
drive through the array with another one, or three.
Rod Speed
2015-03-15 19:57:31 UTC
Post by Charlie
[snip]
Just think of what a backup would take!
I don't bother to back it all up, essentially because it's
spread over multiple physical drives, many of which aren't
even plugged in, so you'd only lose part of the total.
Yousuf Khan
2015-03-15 12:16:39 UTC
Post by Drew
Why would a home user need 10,000 gigabytes of storage? By the time you
fill it something new would come along or it would die first.
I'm pretty close to that amount if I count all internal drives and
external USB drives too.

Yousuf Khan
Mr. Man-wai Chang
2015-03-14 12:40:33 UTC
Post by Yousuf Khan
Sadly This 10TB Hard Drive Is Designed For Servers, Not Your Laptop
http://gizmodo.com/sadly-this-10tb-hard-drive-is-designed-for-servers-not-1691245306
The 6TB model is available to the consumer market!
Post by Yousuf Khan
It sounds complicated, and it is, which is why HGST's new 10TB drive has
been slow to come to market. Everyone involved wants to make sure the
technology and supporting software works perfectly to avoid disastrous
data loss. But there's no reason to think the technology won't be ready
for desktop PCs and eventually laptops in a few years. Who needs that
cloud anyways?
I would worry about these drives' reliability! We are talking about
personal data here! :)
--
@~@ Remain silent. Nothing from soldiers and magicians is real!
/ v \ Simplicity is Beauty! May the Force and farces be with you!
/( _ )\ (Fedora release 21) Linux 3.18.8-201.fc21.i686+PAE
^ ^ 20:33:02 up 3 days 1:35 0 users load average: 0.01 0.04 0.05
No borrowing! No fraud! No compensated dating! No fighting! No robbery! No suicide! Please consider CSSA (Comprehensive Social Security Assistance):
http://www.swd.gov.hk/tc/index/site_pubsvc/page_socsecu/sub_addressesa
Mike Tomlinson
2015-03-14 12:49:05 UTC
Post by Yousuf Khan
but at the cost of more complicated software
on the OS to properly read, write, and over-write data without
destroying neighboring tracks.
This is bullshit. The OS has nothing to do with it, the drive firmware
does it and this process is invisible to the OS.

(Not having a go at you Yousuf, I know you're quoting an article posted
elsewhere)

The reason these are being aimed at the enterprise market is that SMR
(shingled magnetic recording) suffers from very slow write speeds due
to the overlapping tracks, making this sort of drive better suited to
long-term archival storage (e.g. in the cloud) than to a desktop or
server machine.
--
:: je suis Charlie :: yo soy Charlie :: ik ben Charlie ::
Yousuf Khan
2015-03-15 12:25:53 UTC
Post by Mike Tomlinson
Post by Yousuf Khan
but at the cost of more complicated software
on the OS to properly read, write, and over-write data without
destroying neighboring tracks.
This is bullshit. The OS has nothing to do with it, the drive firmware
does it and this process is invisible to the OS.
Maybe the special write procedures for these drives can't be fully
handled by the drive firmware, and so they need an assist from the OS?
It wouldn't surprise me if the write timing is so complex that it's
best handled by a routine that can only run on the host machine, which
has a better understanding of the high-level file system data
structures than the firmware does. Perhaps it's best not to think of
these as hard drives, but as something between a hard drive and a tape
drive?

Yousuf Khan
Wolf K
2015-03-14 14:32:52 UTC
Post by Yousuf Khan
Sadly This 10TB Hard Drive Is Designed For Servers, Not Your Laptop
http://gizmodo.com/sadly-this-10tb-hard-drive-is-designed-for-servers-not-1691245306
[...] but at the cost of more complicated software
on the OS to properly read, write, and over-write data without
destroying neighboring tracks.
It sounds complicated, and it is, which is why HGST's new 10TB drive has
been slow to come to market. Everyone involved wants to make sure the
technology and supporting software works perfectly to avoid disastrous
data loss. But there's no reason to think the technology won't be ready
for desktop PCs and eventually laptops in a few years. Who needs that
cloud anyways?
Once again I want to make the point that a storage device need not be
operated by the operating system. Think of that drive as a cloud. Go
from there.

There is no reason that a storage device can't be built to use standard
protocols to exchange data with whatever device(s) connect to it. The
devices only need to be able to execute those protocols. The device
doesn't even need to be a computer. It can be any electronic gizmo with
hard-coded firmware. Those devices already exist, eg security cameras
that can be accessed from your smartphone. Why can't a 10TB HDD be
configured the same way?
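As a minimal sketch of that idea (my own illustration, nothing HGST
ships, and the mount point is hypothetical): let the box that owns the
drive speak a standard protocol, and the clients never need to know
that SMR is underneath. Python's standard library (3.7+) is enough to
show the principle:

# Serve a directory (imagine it sits on the 10TB SMR drive) over plain HTTP.
# Only this box needs SMR-aware software; clients just speak the protocol.
import functools
import http.server
import socketserver

SHARE_ROOT = "/mnt/bigdrive"     # hypothetical mount point of the big drive
PORT = 8080

Handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                            directory=SHARE_ROOT)

with socketserver.TCPServer(("", PORT), Handler) as httpd:
    httpd.serve_forever()        # browse http://this-box:8080/ from any gadget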

HTH
--
Best,
Wolf K
kirkwood40.blogspot.ca
Paul
2015-03-14 16:21:56 UTC
Post by Wolf K
Why can't a 10TB HDD be
configured the same way?
One of the commenters at the end of that Gizmodo
article points out the difference.

http://gizmodo.com/sadly-this-10tb-hard-drive-is-designed-for-servers-not-1691245306
Paul
2015-03-14 16:24:02 UTC
Post by Paul
Why can't a 10TB HDD be configured the same way?
One of the commenters at the end of that Gizmodo
article points out the difference.
http://gizmodo.com/sadly-this-10tb-hard-drive-is-designed-for-servers-not-1691245306
"There are three ways to implement SMR data management.

HGST uses host managed, which is perfect as they're targeting
hyperscale with these drives.

Seagate uses drive managed, which works with any OS/file system.
Seagate is going for more wide reaching market penetration where
HGST is focused on top 5-10 guys only.

Incidentally, that large hyperscale market doesn't have a problem
with modifying their stack to accommodate SMR.
"

So, in the case of the HGST product, it's a conscious design decision
to "let the user" write the handler.
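To make "host managed" a bit more concrete, here's a toy model of the
rules a host-side handler has to respect (a sketch of the concept only;
real host-managed drives expose zones through the ZBC/ZAC command sets,
and every name and size below is made up):

# Toy host-managed SMR drive: each zone only accepts sequential writes at
# its write pointer, and space is reclaimed by resetting whole zones.
class ZonedDrive:
    def __init__(self, zones=8, zone_size=256 * 1024 * 1024):
        self.zone_size = zone_size
        self.write_pointer = [0] * zones   # next writable offset per zone

    def append(self, zone, nbytes):
        """Write nbytes at the zone's write pointer, the only legal spot."""
        wp = self.write_pointer[zone]
        if wp + nbytes > self.zone_size:
            raise IOError("zone full: host must pick another zone")
        self.write_pointer[zone] = wp + nbytes
        return zone, wp                    # where the data landed

    def reset_zone(self, zone):
        """Throw away the whole zone; the only way to reuse space in it."""
        self.write_pointer[zone] = 0

drive = ZonedDrive()
print(drive.append(0, 4096))   # sequential append is fine: prints (0, 0)
drive.reset_zone(0)            # no random overwrites; reclaim whole zones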

Paul
B00ze/Empire
2015-03-18 01:25:39 UTC
Post by Paul
[snip]
Darn, I replied too fast, here's my answer... ;-)
--
! _\|/_ Sylvain / ***@hotmail.com
! (o o) Member-+-David-Suzuki-Foundation/EFF/Planetary-Society-+-
oO-( )-Oo "Apple" (c) Copyright 1767, Sir Isaac Newton.
Paul
2015-03-18 02:44:17 UTC
Post by B00ze/Empire
[snip]
Darn, I replied too fast, here's my answer... ;-)
It's kinda unbelievable. Obviously HGST want to sell
a limited number of those. Maybe they'll analyze the
Seagate drives and figure out the best policies :-)

Paul

B00ze/Empire
2015-03-18 01:20:17 UTC
There are no longer any gaps in-between each track which allows more
data to be stored on a single platter, but at the cost of more
complicated software on the OS to properly read, write, and over-write
data without destroying neighboring tracks.
That sounds strange; the firmware should handle writes transparently.
I wonder what they mean...
--
! _\|/_ Sylvain / ***@hotmail.com
! (o o) Member-+-David-Suzuki-Foundation/EFF/Planetary-Society-+-
oO-( )-Oo Just got a new computer for my wife. What a great trade!