Why not constant linear velocity floppies?












The outer tracks of a disk are longer than the inner tracks, and could therefore potentially hold more data. Constant angular velocity puts the same number of bits on every track, which wastes much of the potential capacity of the disk. A solution to this problem is constant linear velocity (CLV) which varies the motor speed such that the head can spend more time on the outer tracks and therefore record more data.



But as the linked article indicates, while this is used on optical disks, it has generally not been used on floppy disks. Why not? They would certainly have gained great benefit from more capacity.



It would have added more complexity to the drive controller, but intuitively, this seems unlikely to add significantly to the total cost.



Wikipedia says 'seek performance would be greatly affected during random access by the requirement to continually modulate the disk's rotation speed to be appropriate for the read head's position'. Why? And how much slowdown? Intuitively I would rather the drive spend an extra tenth of a second seeking, than require me to spend an extra ten seconds manually swapping disks because one disk didn't have enough capacity.



Is there another consideration I am missing?










  • There were disks that used at least speed zones, e.g. for the Commodore 1541 drive and for old Macintoshes...
    – Felix Palmen, Jan 23 at 12:11






  • Commodore, at least, has four speed zones but avoids the slow seek problem by rotating the disk at a constant velocity. It just changes its data rate.
    – Tommy, Jan 23 at 12:26






  • What @Tommy said, so it's not exactly the same. The Macintosh format IIRC uses different rotation speeds for the zones, but I'm still looking for a source...
    – Felix Palmen, Jan 23 at 12:28






  • @Wilson found it here: support.apple.com/kb/TA39910?locale=en_US&viewlocale=en_US -- so some Apple drives had 5 speed zones and indeed changed rotation speed. The Commodore 1541 had 4 speed zones, but changed bitrate with a constant rotation speed.
    – Felix Palmen, Jan 23 at 12:34








  • Wouldn't have worked with hard-sectored disks. But those were not very common anyhow.
    – tofro, Jan 23 at 13:27
7 Answers
Answering the "why not", as the "actually, there was" has already been covered: Essentially, it's because of the generally small number of sectors per track on a floppy disk, or in other words the much more limited amount of data per revolution, and the rather smaller variation in track length from the innermost to the outermost track. Also, as has been said above, the fact that the tracks are concentric, thus with a whole number of sectors per revolution, rather than being a continuous spiral where each sector can start and end at any arbitrary angle vs a fixed point of reference.



This both makes it less straightforward to engineer - you can't simply make a smooth variation of motor speed vs head position, but have to separate the disk into multiple speed zones (the smallest/largest number of tracks you may have is 35~84, and number of sectors about 8~21, with a particular disk size and coating formulation normalising around a fairly tight range, so each zone needs to be at the very least three tracks wide and could well spread over ten or even twenty tracks), each demanding just as tight motor control as a simpler single-speed mechanism, while the hardware (not just software) has to maintain absolute certainty over which track the head is currently sitting over - and limits the potential benefit of the technique.



For example, the Apple drives pulled about an extra 11% out of each disk vs the MSDOS standard, as they had to reduce the sector counts on the inner tracks to account for less than rigid motor speed control across the various zones; the Amiga, and common custom formats on the Atari ST, as well as Microsoft's own DMF system-disk format, achieved similar or better capacity on ordinary disks, at least for reading (and for writing with all but the sloppiest, sub-average rpm drives) with single-speed/CAV recording just by increasing the number of sectors on all tracks (e.g. 10 sectors instead of 9) and tightening up the inter-sector and end-of-track timings.



(The sector counts have to be varied by changing the motor speed rather than, as with high speed CDRW/DVDRW/BDRW drives or hard drives, holding the motor speed steady and varying the data rate, because the floppy controller chips can generally only operate at one or two fixed rates, with their clock input being - at least in older machines - either locked to the system clock or a separate crystal that's the fastest one in the entire machine, so it can't be finely subdivided into a variety of slightly different rates (it's either 1:1, or 1/2...), nor multiplied up from a lower frequency using a PLL. Optical drives and hard drives take their clock from the pre-hard-formatted media itself, but soft-formatted floppies have to use a reference within the computer itself.)



The actual recorded area of a floppy disk is quite obvious - it's a little narrower than the "window" of a 5.25 or 8 inch envelope, or the one that opens up on a 3.5 (or 3.0, 2.5...) inch type. Its boundaries are a good way in from both the physical edge of the disc proper and the hub, and their radii don't vary a huge amount in comparison to that of the middle track. If we go by Apple's example, it can be assumed that 3.5 inch disks exhibit about a 1.5:1 variation between the innermost and outermost tracks, and perhaps even less (about 1.4:1) on 5.25 inchers. The notional amount of wastage with CAV recording is quite low - if you push the limits, you might get 20 to 25% extra, but realistically no more than 15%, which wasn't worth the bother (and considerable extra cost) to most manufacturers who didn't have a hardware design savant sitting in one of the founders' seats like Apple (even Commodore, who had their own IC fabs and other first-party hardware factories, didn't much bother with the idea).
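To put rough numbers on that, here is a minimal Python sketch (an illustration added here, not part of the original answer, and assuming the track-radius ratios just mentioned) of the idealised gain a zoned/CLV scheme could offer over plain CAV:

    # Idealised CLV-over-CAV gain for concentric tracks, ignoring sector
    # quantisation and motor-speed tolerances (radius ratios assumed as above).
    def ideal_clv_gain(radius_ratio):
        # CAV stores the same amount on every track as on the innermost one;
        # ideal CLV stores an amount proportional to each track's radius, so
        # the average track holds (1 + ratio) / 2 times the innermost track.
        return (1 + radius_ratio) / 2 - 1

    for name, ratio in [("3.5 inch (~1.5:1)", 1.5), ("5.25 inch (~1.4:1)", 1.4)]:
        print(f"{name}: ideal gain ~{ideal_clv_gain(ratio):.0%}")
    # -> ~25% and ~20%; once whole sectors and speed tolerances are accounted
    #    for, real zoned formats land nearer the 10-15% mentioned above.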



On a typical optical disc, however, almost the entire visible surface area is available for data storage, from a few millimetres out from the transparent hub through to a millimetre or so from the outer edge. The speed of an envelope-pushing 81 minute music CD varies by almost a full 2.5x as it plays the audio data out at a steady speed with no meaningful buffering, implying an outer radius nearly 2.5x that of the innermost, and DVDRs written within safe limits (avoiding the last millimetre or so where error rates skyrocket) show a 2.4x speed variation when running in variable data-rate CAV mode. Therefore if you were to operate in pure CAV, fixed sector count per revolution mode with those, you would lose a significant amount of the total capacity, easily a third or more, which would mean the difference between storing 75 minutes, or just 50 minutes. This loss can actually be seen with CAV Laserdiscs (tuned for steady freezeframe, or storing thousands of still photos rather than the maximum amount of analogue video... which is a little strange because their analogue nature would allow cramming in more sectors to the inner tracks at the expense of some horizontal resolution) which have a similar inner/outer radius ratio to CDs and show a noticeably lower runtime, and in the early "PacketCD" standards for floppy-like addressing of CDRWs (with fixed numbers of sectors per revolution, Z-CAV speed control, and a larger than normal gap between sectors, all compensating for the difficulty of accurately rewriting individual sectors in a continual-spiral disc format never intended for anything other than one-time writing of single large sessions consisting of many thousands of sectors) which saw the recordable capacity of a 650+ MB CDRW fall to barely more than 500MB, and a 700MB disc to about 530MB.



The latter examples are also an answer to why we don't use continuous spirals for floppies either; it's just too complicated, from an engineering perspective. The finesse of control exerted in CD transports in terms of head positioning was simply exquisite by the standards of the early 80s and easily accounted for as much of the thousand-plus dollar selling price of the first players as the actual laser diodes, the high-data-rate decoder circuitry and ultra high fidelity analogue output stage. A compact disc spins at about the same average speed as a floppy disc (whilst delivering 10x the data rate of even the fastest floppies, and more like 80x that of a turn-of-the-80s model), but moves the equivalent of one track width per rotation (as the read-out is nonstop, unlike most floppies, which usually need at least 3 revolutions to read a full, single-sided track of data and potentially as many as 21)... and can carry on doing that for anything up to 80 minutes (whereas most floppies can be fully read in under 5 minutes, sometimes less than 1 minute). It might average about 375rpm over those 80 minutes, so the head needs to be able to seek between at least 30,000 individual, microscopic tracks (across a start-to-end width of maybe 2 inches max), and that's if we assume the laser head's groove-following abilities have enough swing to cover half a track width one way or the other instead of the head sled having to step 12.5 or so half-tracks per second, or even 25 quarter-tracks. A floppy drive, as stated, only needs the ability to step somewhat coarsely between 35 to 84 tracks over a slightly narrower sweep, which is a large enough distance that the mechanism can be clearly seen moving from one track to the next.



And, of course, to maintain continual tracking (but still with random-access abilities), the RW head mechanism would either have to be stepped rather more finely (say, a tiny tick between each sector), or be equipped with a similar track-following servo mechanism that electromagnetically (problematic for magnetic media...) swings the actual coils side to side within the frame...



Considering how much even mundane floppy drives cost during the era when that sort of advance would actually have been useful, the necessary engineering upgrades to enable it would have been absolutely prohibitive. Maybe the sort of thing IBM would have indulged in for the drives attached to their mainframes, but unlikely to be borne even by minicomputer builders like DEC, let alone microcomputer firms.



However, there is one place that variable sector counts on magnetic media are commonly found, and have been for about the last thirty years (though it still didn't become common until well into the CDROM and Mac Floppy era): Hard drives. There's a reason that, before the rise of SSDs, there was benefit to defragmenter utilities that moved all your system files and most frequently loaded programs to the "start" of the disk: the lower numbered sectors sit at the outer edge of each platter (the outermost "cylinders" - a set of still-concentric tracks shared between platters, as all the heads move in lockstep with each other), which have more sectors per revolution than the inner cylinders (and "later" sectors)... therefore delivering a considerably higher data rate (the difference between inner and outer track radius being at least as much as on a CD) and reducing both the need for track seeking and the distance to be sought (as it takes fewer tracks to store the same amount of data).

Very early drives used a fixed number of sectors per cylinder, and can often be identified by the use of pre-emphasis zones (or alternatively zones of "reduced write current", which are simply the logical inverse)... that is, a cylinder number denoting where the write signal had to be amplified in order to successfully write the same amount of data to the denser areas towards the inner part of the platters. Before too long, however, the logical sectors and tracks became divorced from the physical ones, as manufacturers took advantage of the varying writeable density of each track to both simplify the electronics (maintaining a steady write current throughout) and greatly increase the total capacity without affecting reliability or having to improve either the mechanical components of the drive or the magnetic material coating the platters.

Their existing inherent greater density and speed (partly from multiple platters, wider head sweep and faster motor speed, plus higher grade control electronics, but also rigid discs with higher quality magnetic coatings and non-contact "flying" heads, all of which allowed closer-set tracks each with more sectors than a typical floppy) aided this, as the sector count can be varied more finely if the midpoint of a ~2.5x range has 40 sectors vs a ~1.5x range with 10 sectors at the midpoint, and the more sophisticated controllers and tighter controlled rotational speed (synchronous direct-drive motors with near zero friction, vs factory-calibrated but otherwise unregulated, often belt-drive spindles with considerable friction from both the heads and the material of the disk envelope itself) are fertile ground for a wide-ranging sector count across dozens of zones of a few tracks each with the minimum necessary slop.



And, ultimately, it's that type of tech that ended up being incorporated into the "super floppies" we did eventually get, first in the form of the Bernoulli and Syquest style hard-disc and magneto-optical disc cartridges, and then the rather truer Zip100 and LS120 "floppies", bridging the gap between regular 1.4MB DSHDs, the still extremely expensive and not at all hot swappable true hard drives, and the yet-to-mature CDRW technology.



Funnily enough, though, a few manufacturers did make continuous-spiral floppy discs... but these were all quite crude, limited capacity affairs used in niche, low cost applications, such as electronically controlled sewing machines, or "smart" word-processor-ish electric typewriters. They were essentially little better than flattened out, somewhat faster audio cassettes, as they were read or written in a single rapid swipe (there was a possibility of holding multiple files, but they would all have to be read into the machine's memory, then re-written with only the active file actually changing, so usually it was more useful to save one file on each disk), and held a few dozen kilobytes at best (again, only really useful for one file or a few small ones), though they were at least quite small (several fitting in the same volume as an audio tape; this meant a non-standard size, however), robust (more so than a tape), and only took 20~30 seconds to read or write vs the several minutes the same data might take from a tape deck. They were a way to make a floppy drive as simply and cheaply as possible, rather than as high capacity as possible whilst still being reliable, and the head position was geared directly to the hub spindle and motor.

One turn of the spindle meant the equivalent of a single head step in a conventional drive (with fewer tracks, lower rpm, and less data per revolution), and random seeking was impossible; returning to the start meant (automatically) "rewinding" the drive, and the hub had to be keyed in the same way as a 3.5 inch drive, but in that case to preserve the head position vs rotation angle relationship instead of being a way to generate a sync pulse off the motor (vs the optically-read sync hole punched into 5.25 media). Crap as it was, that was about the only practical example of true variable data density on floppy disk media (there weren't even any real "sectors", let alone a whole or fixed number per "track"), and certainly the only attempt at continuous-track recording.






  • Mind you, it's still a damn good question, and along the lines of the kind of pie-in-the-sky thought experiment I like to do myself... what if some particular innovation, within the reach of an older tech level but still requiring active invention, had occurred to an inventor several years before it did in this timeline? Or even just a different design philosophy (or, heck, a less poisonous and more dev-friendly management culture) had prevailed within a certain company, and thus the obvious potential upgrades glaring by their absence from the historical record had actually happened?
    – tahrey, Jan 25 at 0:27



















But as the linked article indicates, while this is used on optical disks, it has generally not been used on floppy disks.




It has. Apple's famous Twiggy drive was one attempt to do so. It featured 6 different zones with 15 to 21 sectors per track. As a result, some 120 additional sectors per side (or ~100 KiB per disk) could be used. To keep the data rate constant, rotation varied between 394 RPM (outer tracks) and 590 RPM (inner tracks).



With the switch to 3.5 inch drives on the Lisa and later the Mac, this idea was the reason for Apple to develop their own format (*1) using a Sony drive. In fact, controller and parameters stayed mostly the same (394-590 RPM), except now it was 8-12 sectors due to the shorter track length of a 3.5 inch drive. This was offset by using 80 instead of 46 tracks, so a single-sided 3.5 inch drive held 400 KiB, while a double-sided Twiggy had ~850 KiB.



And then there is one really widespread use of the basic idea, even predating the Twiggy: the Commodore drives (*2), starting with the PET's 2031 but most notably the 1541. All of these 170 KiB drives wrote different zones of 17 to 21 sectors per track. But instead of running the disk at different speeds, the data rate was varied to the same effect. This was possible as the controller was not only specified to work over the extended range, but was also what today might be called a software-defined FDC, as all data handling was done by a separate 6502 in software.
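To make those zoned layouts concrete, here is a minimal Python sketch of the arithmetic; the per-zone tables are the commonly documented ones for the 400 KiB single-sided Mac GCR format and for the 1541, not figures taken from this answer:

    # Capacity from zone tables: (tracks in zone, sectors per track) per zone.
    mac_400k_zones = [(16, 12), (16, 11), (16, 10), (16, 9), (16, 8)]  # 512-byte sectors
    c1541_zones    = [(17, 21), (7, 19), (6, 18), (5, 17)]             # 256-byte sectors

    mac_bytes   = sum(t * s * 512 for t, s in mac_400k_zones)
    c1541_bytes = sum(t * s * 256 for t, s in c1541_zones)

    print(mac_bytes // 1024, "KiB")    # 400 KiB (800 sectors over 80 tracks)
    print(c1541_bytes // 1024, "KiB")  # 170 KiB (683 sectors over 35 tracks)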



Other companies/developers toyed with the same idea as Apple or Commodore; for example, the Sirius (*3) did so as well, increasing the standard 500 KiB (unformatted) capacity per side to 600 KiB. And instead of asking buyers to get 'special' media to make it happen, standard diskettes could be used. Then again, that's no surprise, as it was the brainchild of Chuck Peddle - the same man who was behind the PET development :))




Why not? They would certainly have gained great benefit from more capacity.




The benefit of some 10-15% increased capacity (*4) came with several drawbacks:





  • Track switch time (as mentioned) will increase whenever a zone border is crossed and the motor speed needs to be changed. While not as much as spinning up from standstill, it may take several turns to stabilize. After all, we have real motors and real masses to accelerate or decelerate here.




    • This was offset a bit with the introduction of direct drive and smaller disk sizes, but not much.

    • To counter the cost, not every track was handled differently; instead, zones were used (*5).



  • Increased cost for motor control on the drive side


  • Increased cost for motor control on the controller side

  • Introduction of a non-standard interface, as the existing interface had no motor control lines besides on and off


Especially the latter was an even greater turn-off for manufacturers than increased cost. Offering more capacity is only a minor advertising point, and increased cost may be passed on for it (*6), but a non-standard interface means a hard migration into uncharted territory - even more so being tied to this non-standard manufacturer (of the drives). Nothing CEOs like.



The 3.5 inch drive itself is a major example here, as it only took off after stripping away everything 'new' but the size and making it compatible with the existing 5.25 drives. Even Apple at one point dropped their scheme and changed to standard drives - much like Commodore did twice when offering the 1570/71 drives, capable of reading and writing standard-compatible 5.25 disks.



The alternative of modifying the data rate would have offset some of the drawbacks, but required new controller chips and a different analogue setup - also more delicate, as data rate and head gap are related.




Intuitively I would rather the drive spend an extra tenth of a second seeking, than require me to spend an extra ten seconds manually swapping disks because one disk didn't have enough capacity.




Sure thing - except, with a gain of only ~15% in size (*4 again), the saving isn't much and you'll soon be swapping again. With diskette sizes it's much like with CPU speed: everything less than doubling is hard to notice.





*1 - Keep in mind, this was before the standardisation of 3.5" drives.



*2 - Thanks to Felix Palmen for reminding.



*3 - Wilson dug out the corresponding patent.



*4 - Just pull out your geometry books and calculate the difference in diameter between a circle of 1.354 inches and one of 2.25 inches (for 5.25 inch drives), or 0.9719 and 1.5551 inches (for 3.5 inch drives) - or be lazy and just divide them to get a factor giving the relative length increase.



Also keep in mind that only whole blocks will work, so a new block can only be added to a track once its length has increased by at least ~1/16th for 5.25 inch or ~1/8th for 3.5 inch disks.



*5 - Which as well makes sense for blocked structures with fixed block length.



*6 - And this was soon offset by new, cheap(er) integrated solutions.
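For footnote *4, the lazy version of the calculation, as a small Python sketch (the figures are the ones given in the footnote):

    # Relative track-length increase from innermost to outermost track: since
    # circumference scales linearly with diameter, just divide the two figures.
    print(2.25 / 1.354)      # ~1.66x for 5.25 inch media
    print(1.5551 / 0.9719)   # ~1.60x for 3.5 inch media

    # Per the footnote, only whole blocks count: a track gains another sector
    # only once its length has grown by ~1/16th (5.25 inch) or ~1/8th (3.5 inch).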






  • @rwallace As you try to go closer to the center, you have a shorter track length so fewer sectors per track, and you need faster motor speeds to keep the same data transfer rate. The faster speeds will put more stress on the whole disk area through friction between the disk and the casing and cause more "wobble" through out-of-balance disks. CDs can use a larger range since they are rigid and have no mechanical contact with the disk for reading and writing.
    – alephzero, Jan 23 at 14:08






  • @rwallace Keep in mind that a head and a head mount still need space inside the slot. Leaving off 1.5 cm seems reasonable.
    – Raffzahn, Jan 23 at 14:23






  • You forgot the physical size of the read/write head, which was huge compared with a hard disk head because it has to physically hold the disk material flat. See kids.kiddle.co/Floppy_disk (near the bottom) for pictures.
    – alephzero, Jan 23 at 14:33






  • Here's a fun story about the Twiggy drives used in the early Apple machines, and the sometimes hilarious politics within the company... folklore.org/StoryView.py?story=Hide_Under_This_Desk.txt
    – KlaymenDK, Jan 23 at 14:56






  • Re: the Commodore drives, you're correct. See e.g. this C1541 memory map: sta.c64.org/cbm1541mem.html — check out address 1C00 and bits 5-6 which set data density. You can also check out that drive's ROM disassembly at ffd2.com/fridge/docs/1541dis.html looking at the routine from F33C, especially from "get number of sectors per track" at F348 down to actually setting the density at F35C. Density is entirely programmatic, so a fast loader or other similar thing could try to over-juice some of the shorter tracks if it wanted.
    – Tommy, Jan 23 at 16:24





















During the heyday of floppy drives, the technology needed to do CLV was rather expensive. Writing/reading data at a fixed rate and running a motor at a fixed speed is the cheapest option. Variable-speed oscillators were uncommon and not generally available as single integrated circuits.



At the time, cost tended to be the most important factor, as computers were still very expensive and many people were still using tapes for storage; compared to tape, a floppy drive was relatively fast and spacious.



The gains were also somewhat marginal. Consider Apple's expensive "Twiggy" drives: they proved unreliable and could only store 871k of data on a 5.25" disk. Sony had already released its 3.5" format two years earlier, and with double-density disks computers were able to store 880k on them. They quickly became popular and cheap.



I'm not convinced about Wikipedia's claim that speed changes would have had a big influence on seek times. Floppy disks have a lot more friction and run at much lower speeds than optical media, where these speed changes do reduce seek performance.






  • Spinning up and down to speed and waiting for the drive to settle did take longer than running the drive motor at a fixed speed. Maybe it's the age of my drive, but on my Apple IIgs there's a marked delay between the drive speed changing and the track stepper moving.
    – scruss, Jan 23 at 15:46






  • Sure, but not as much as with a DVD or similar optical disc. The speed change is slow enough that you can hear it ramping up. Floppy disks rotate at a much lower speed and go from zero to operating speed in a few hundred milliseconds at most.
    – user, Jan 23 at 17:02






  • Sony's 400k and 800k floppy drives for Apple Macintosh were zoned (had multiple speeds).
    – Whit3rd, Jan 24 at 8:07






  • I counter-contest the idea that the spindle speed change is a major factor in optical drive seek times. Head seeking is quite slow, sometimes slower than that of a floppy drive, and it may take a few revolutions to actually sync to the right radial position by comparing the desired sector number to what's passing under the head and fine-tuning back and forth. In contrast, high speed modes (>24x) are all CAV, i.e. CONSTANT motor speed, slower CLV ones don't change by more than 2.5x from inner (high rpm) to outermost (low), and data transfer proper is clocked by the hard-formatted groove wobble...
    – tahrey, Jan 25 at 0:19






  • (Thus your optical drive doesn't actually need to spin down all the way before it can start reading from its new further-out head position, and doesn't need to spin up at all for a new further-in one; so long as the resulting raw data rate is within the limits of what the decoder hardware can interpret and buffer for output, it can start reading immediately the correct starting sector is identified, regardless of the spindle speed, and continue doing so as the motor adjusts to the set target speed. And, of course, most of the rpm change can be managed during the head seek anyway...)
    – tahrey, Jan 25 at 0:22



















At least one computer does (something close to) what you are describing, and there is a relevant patent.



Wikipedia claims:




But disks made at constant bit density were not compatible with machines with standard drives.




And this is apparently supported by a dead-tree citation. I read the sentence as meaning that the drives need special disks. So my guess is that because these drives never caught on, the disks didn't -- it was a chicken-and-egg problem that caused this implementation of the technique not to gain much market share or traction in the industry.



And perhaps because of the patent, no-one else tried to make a CLV floppy drive as far as I can tell.






  • Good find, thanks! I would expect the sentence to mean you can use the same blank disks, but disks written on one kind of drive cannot be read on the other kind.
    – rwallace, Jan 23 at 12:21






  • AFAIR Apple's Twiggy drive did so before.
    – Raffzahn, Jan 23 at 13:45






  • @Raffzahn before the Sirius 9000? So why did they award the patent?
    – Wilson, Jan 23 at 13:57






  • @Wilson That's something you may want to ask the patent office :)) According to Andy Hertzfeld's story, the Twiggy drives were developed in 1981. The Lisa (with Twiggys) went on sale in January 1983. The patent was filed in October 1982. It's safe to assume that the Twiggys weren't developed in 3 months from idea to delivery, so I guess it's either invalid by prior art, or its claims are more subtle on certain parts of the logic - something different from the way Apple (or Commodore) did it.
    – Raffzahn, Jan 23 at 14:09






  • According to this (p. 1-2, § 1.3), "the Victor 9000 uses 5 1/4-inch minifloppies of a similar type to those used in other computers." It goes on to imply that the disks are physically compatible but states that they are not interchangeable with other systems due to the wildly different formatting (variable rotation speed).
    – Alex Hajnal, Jan 23 at 23:06





















The 800 KB second generation Macintosh 3.5 inch floppy had CV zones (Wikipedia: Floppy disk). When the next gen floppy (1.44 MB) came out, it didn't use the CV technology, but was supported on many different OSes, relegating the CV version to the back of the bookshelf.



You could hear the speed change if you listened closely.






  • As did the Apple IIgs 800 K drive
    – scruss, Jan 24 at 1:21






  • Also the single-sided 400k Mac floppy, 1984 through 1986.
    – Whit3rd, Jan 24 at 8:10



















Something nobody else has mentioned is that the nature of the data stored on optical media and on magnetic media was very different at the time.



During the floppy era, optical disks were mostly write-once media that contained very long, sequential files - mostly music (CDs) or video (Laserdisc). Data disks did start becoming popular towards the end of the floppy era, but they were still write-once, and often structured as large packed data files.



Write-once is important, since it implies there is no disk fragmentation. Disk fragmentation would be a serious issue for CLV drives, since it could require multiple significant changes to rotation speed while reading even a small file. Large sequential files, as were common on optical media, meant seeks were rare (and consequently significant changes to rotation speed were rare).






Another point not yet mentioned is that for constant linear velocity to offer the most benefit, it is necessary to use a spiral track rather than a series of rings. Every separately-writable sector on a disk has a significant amount of overhead, so using larger sectors will improve storage efficiency. If one uses constant-linear-velocity storage with consecutive rings, however, each track will lose an average of half a sector because there's no way to store half a sector on a track. Using a spiral will eliminate any need to have an integer number of sectors per revolution.



I've read that some copy protection schemes for the Apple II wrote information in a spiral. I suspect they did so more for purposes of thwarting piracy rather than enhancing storage density, but I suspect that a disk operating system that was limited to loading and storing 32Kbyte chunks could probably fit six spiral-written chunks on what would normally be a 140K floppy.
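As a rough feel for the size of that half-a-sector-per-track overhead, here is a small Python sketch using Apple II-like figures (35 tracks, 256-byte sectors, nominally 140K per side); the numbers are chosen purely as an illustration, not taken from this answer:

    # Expected capacity lost to rounding each ring down to whole sectors
    # under a ring-based CLV scheme.
    tracks, sector_bytes, nominal_bytes = 35, 256, 140 * 1024

    expected_loss = tracks * sector_bytes // 2       # half a sector per track on average
    print(expected_loss)                             # 4480 bytes, ~4.4 KiB
    print(expected_loss / nominal_bytes)             # ~3% of the nominal capacity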






    share|improve this answer























      Your Answer








      StackExchange.ready(function() {
      var channelOptions = {
      tags: "".split(" "),
      id: "648"
      };
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function() {
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled) {
      StackExchange.using("snippets", function() {
      createEditor();
      });
      }
      else {
      createEditor();
      }
      });

      function createEditor() {
      StackExchange.prepareEditor({
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: false,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      imageUploader: {
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      },
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      });


      }
      });














      draft saved

      draft discarded


















      StackExchange.ready(
      function () {
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f8911%2fwhy-not-constant-linear-velocity-floppies%23new-answer', 'question_page');
      }
      );

      Post as a guest















      Required, but never shown

























      7 Answers
      7






      active

      oldest

      votes








      7 Answers
      7






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      2














      Answering the "why not", as the "actually, there was" has already been covered: Essentially, it's because of the generally small number of sectors per track on a floppy disk, or in other words the much more limited amount of data per revolution, and the rather smaller variation in track length per revolution. Also, as has been said above, the fact that the tracks are concentric, thus with a whole number of sectors per revolution, rather than being a continuous spiral where each sector can start and end at any arbitrary angle vs a fixed point of reference.



      This both makes it less straightforward to engineer - you can't simply make a smooth variation of motor speed vs head position, but have to separate the disk into multiple speed zones (the smallest/largest number of tracks you may have is 35~84, and number of sectors about 8~21, with a particular disk size and coating formulation normalising around a fairly tight range, so each zone needs to be at the very least three tracks wide and could well spread over ten or even twenty tracks), each demanding just as tight motor control as a simpler single-speed mechanism, and for the hardware (not just software) to maintain absolute certainty over which track the head is currently sitting over - and limits the potential benefit of the technique.



      For example, the Apple drives pulled about an extra 11% out of each disk vs the MSDOS standard, as they had to reduce the sector counts on the inner tracks to account for less than rigid motor speed control across the various zones; the Amiga, and common custom formats on the Atari ST, as well as Microsoft's own DMF system-disk format, achieved similar or better capacity on ordinary disks, at least for reading (and for writing with all but the sloppiest, sub-average rpm drives) with single-speed/CAV recording just by increasing the number of sectors on all tracks (e.g. 10 sectors instead of 9) and tightening up the inter-sector and end-of-track timings.



      (The sector counts have to be varied by changing the motor speed rather than, as with high speed CDRW/DVDRW/BDRW drives or hard drives, holding the motor speed steady and varying the data rate because the floppy controller chips can generally only operate at one or two fixed rates, with their clock input being - at least in older machines - either locked to the system clock or a separate crystal that's the fastest one in the entire machine, so can't be finely subdivided to a variety of slightly-different rates (it's either 1:1, or 1/2...), nor multiply up from a lower frequency using a PLL. Optical drives and hard drives take their clock from the pre-hard-formatted media itself, but soft-formatted floppies have to use a reference within the computer itself)



      The actual recorded area of a floppy disk is quite obvious - it's a little narrower than the "window" of a 5.25 or 8 inch envelope, or that opens up on a 3.5 (or 3.0, 2.5...) inch type. Its boundaries are a good way in both from the physical edge of the disc proper, and the hub, and their radii don't vary a huge amount in comparison to that of the middle track. If we go by Apple's example, it can be assumed that 3.5 inch disks exhibit about a 1.5:1 variation between the innermost and outermost tracks, and perhaps even less (about 1.4:1) on 5.25 inchers. The notional amount of wastage with CAV recording is quite low - if you push the limits, you might get 20 to 25% extra, but realistically no more than 15%, which wasn't worth the bother (and considerable extra cost) to most manufacturers who didn't have a hardware design savant sitting in one of the founders' seats like Apple (even Commodore, who had their own IC fabs and other first-party hardware factories didn't much bother with the idea).



      On a typical optical disc, however, almost the entire visible surface area is available for data storage, from a few millimetres out from the transparent hub through to a millimetre or so from the outer edge. The speed of an envelope-pushing 81 minute music CD varies by almost a full 2.5x as it plays the audio data out at a steady speed with no meaningful buffering, implying an outer radius nearly 2.5x that of the innermost, and DVDRs written within safe limits (avoiding the last millimetre or so where error rates skyrocket) show a 2.4x speed variation when running in variable data-rate CAV mode. Therefore if you were to operate in pure CAV, fixed sector count per revolution mode with those, you would lose a significant amount of the total capacity, easily a third or more, which would mean the difference between storing 75 minutes, or just 50 minutes. This loss can actually be seen with CAV Laserdiscs (tuned for steady freezeframe, or storing thousands of still photos rather than the maximum amount of analogue video... which is a little strange because their analogue nature would allow cramming in more sectors to the inner tracks at the expense of some horizontal resolution) which have a similar inner/outer radius ratio to CDs and show a noticeably lower runtime, and in the early "PacketCD" standards for floppy-like addressing of CDRWs (with fixed numbers of sectors per revolution, Z-CAV speed control, and a larger than normal gap between sectors, all compensating for the difficulty of accurately rewriting individual sectors in a continual-spiral disc format never intended for anything other than one-time writing of single large sessions consisting of many thousands of sectors) which saw the recordable capacity of a 650+ MB CDRW fall to barely more than 500MB, and a 700MB disc to about 530MB.



      The latter examples are also an answer to why we don't use continuous spirals for floppies either; it's just too complicated, from an engineering perspective. The finesse of control exerted in CD transports in terms of head positioning was simply exquisite by the terms of the early 80s and easily counted for as much of the thousand-plus dollar selling price of the first players as the actual laser diodes, the high-data-rate decoder circuitry and ultra high fidelity analogue output stage. A compact disc spins at about the same average speed as a floppy disc (whilst delivering 10x the data rate of even the fastest floppies, and more like 80x that of a turn-of-the-80s model), but moves the equivalent of one track width per rotation (as the read-out is nonstop, unlike most floppies, which usually need at least 3 revolutions to read a full, single-sided track of data and potentially as many as 21)... and can carry on doing that for anything up to 80 minutes (whereas most floppies can be fully read in under 5 minutes, sometimes less than 1 minute). It might average about 375rpm over those 80 minutes, so the head needs to be able to seek between at least 30,000 individual, microscopic tracks (across a start-to-end width of maybe 2 inches max), and that's if we assume the laser head's groove-following abilities have enough swing to cover half a track width one way or the other instead of the head sled having to step 12.5 or so half-tracks per second, or even 25 quarter-tracks. A floppy drive, as stated, only needing the ability to step somewhat coarsely between 35 to 84 tracks over a slightly narrower sweep, which is a large enough distance that the mechanism can be clearly seen moving from one track to the next.



      And, of course, to maintain continual tracking (but still with random-access abilities), the RW head mechanism would either have to be stepped whole rather more finely (say, a tiny tick between each sector), or be equipped with a similar track-following servo mechanism that electromagnetically (problematic for magnetic media...) swings the actual coils side to side within the frame...



      Considering how much even mundane floppy drives cost during the era when that sort of advance would actually have been useful, the necessary engineering upgrades to enable it would have been absolutely prohibitive. Maybe the sort of thing IBM would have indulged in for the drives attached to their mainframes, but unlikely to be withstood even by minicomputer builders like DEC, let alone microcomputer firms.



      However, there is one place that variable sector counts on magnetic media are commonly found, and have been for about the last thirty years (though it still didn't become common until well into the CDROM and Mac Floppy era): Hard drives. There's a reason that, before the rise of SSDs, there was benefit to defragmenter utilities that moved all your system files and most frequently loaded programs to the "start" of the disk: the lower numbered sectors sit at the outer edge of each platter (the outermost "cylinders" - a set of still-concentric tracks shared between platters, as all the heads move in lockstep with each other), which have more sectors per revolution than the inner cylinders (and "later" sectors)... therefore delivering a considerably higher data rate (the difference between inner and outer track radius being at least as much as on a CD) and reducing both the need for track seeking and the distance to be sought (as it takes fewer tracks to store the same amount of data). Very early drives used a fixed number of sectors per cylinder, and can often be identified by the use of pre-emphasis zones (or alternatively zones of "reduced write current", which are simply the logical inverse)... that is, a cylinder number denoting where the write signal had to be amplified in order to successfully write the same amount of data to the denser areas towards the inner part of the platters. Before too long, however, the logical sectors and tracks became divorced from the physical ones, as manufacturers took advantage of the varying writeable density of each track to both simplify the electronics (maintaining a steady write current throughout) and greatly increase the total capacity without affecting reliability or having to improve either the mechanical components of the drive or the magnetic material coating the platters. Their existing inherent greater density and speed (partly from multiple platters, wider head sweep and faster motor speed, plus higher grade control electronics, but also rigid discs with higher quality magnetic coatings and non-contact "flying" heads, all of which allowed closer-set tracks each with more sectors than a typical floppy) aided this as the sector count can be varied more finely if the midpoint of a ~2.5x range has 40 sectors vs a ~1.5x range with 10 sectors at the midpoint, and the more sophisticated controllers and tighter controlled rotational speed (synchronous direct-drive motors with near zero friction, vs factory-calibrated but otherwise unregulated, often belt-drive spindles with considerable friction from both the heads and the material of the disk envelope itself) are fertile ground for a wide ranging sector count across dozens of zones of a few tracks each with the minimum necessary slop.



      And, ultimately, it's that type of tech that ended up being incorporated into the "super floppies" we did eventually get, first in the form of the Bernoulli and Syquest style hard-disc and magneto-optical disc cartridges, and then the rather truer Zip100 and LS120 "floppies", bridging the gap between regular 1.4MB DSHDs, the still extremely expensive and not at all hot swappable true hard drives, and the yet-to-mature CDRW technology.



      Funnily enough, though, a few manufacturers did make continuous-spiral floppy discs... but these were all quite crude, limited capacity affairs used in niche, low cost applications, such as electronically controlled sewing machines, or "smart" word-processor-ish electric typewriters. They were essentially little better than flattened out, somewhat faster audio cassettes, as they were read or written in a single rapid swipe (there was a possibility of holding multiple files, but they would all have to be read into the machine's memory, then re-written with only the active file actually changing, so usually it was more useful to save one file on each disk), and held a few dozen kilobytes at best (again, only really useful for one file or a few small ones), though they were at least quite small (several fitting in the same volume as an audio tape; this meant a non-standard size, however), robust (moreso than a tape), and only took 20~30 seconds to read or write vs the several minutes the same data might take from a tape deck. They were a way to make a floppy drive as simply and cheaply as possible, rather than as high capacity as possible whilst still being reliable, and the head position was geared directly to the hub spindle and motor. One turn of the spindle meant the equivalent of a single head step in a conventional drive (with fewer tracks, lower rpm, and less data per revolution), and random seeking was impossible; returning to the start meant (automatically) "rewinding" the drive, and the hub had to be keyed in the same way as a 3.5 inch drive, but in that case to preserve the head position vs rotation angle relationship instead of being a way to generate a sync pulse off the motor (vs the optically-read sync hole punched into 5.25 media). Crap as it was, that was about the only practical example of true variable data density on floppy disk media (there weren't even any real "sectors", let alone a whole or fixed number per "track"), and certainly the only attempt at continuous-track recording.






      answered Jan 24 at 23:56









      tahrey









      • 1





        Mind you, it's still a damn good question, and along the lines of the kind of pie-in-the-sky thought experiment I like to do myself... what if some particular innovation, within the reach of an older tech level but still requiring active invention, had occurred to an inventor several years before it did in this timeline? Or even just a different design philosophy (or, heck, a less poisonous and more dev-friendly management culture) had prevailed within a certain company, so the obvious potential upgrades glaring by their absence from the historical record had actually happened?

        – tahrey
        Jan 25 at 0:27

















      30















      But as the linked article indicates, while this is used on optical disks, it has generally not been used on floppy disks.




      It has. Apple's famous Twiggy drive was one attempt to do so. It featured 6 different zones with 15 to 21 sectors per track; as a result, some 120 additional sectors per side (or ~100 KiB per disk) could be used. To keep the data rate constant, rotation varied between 394 RPM (outer tracks) and 590 RPM (inner tracks).



      With the switch to 3.5 inch drives on the Lisa and later the Mac, this idea was the reason for Apple to develop their own format (*1) using a Sony drive. In fact, the controller and parameters stayed mostly the same (394-590 RPM), except that now it was 8-12 sectors per track due to the shorter track length of a 3.5 inch drive. This was offset by using 80 instead of 46 tracks, so a single-sided 3.5 inch disk held 400 KiB, while a double-sided Twiggy held ~850 KiB.
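
      As a quick sanity check on those figures: keeping the data rate at the head constant just means the product of sectors-per-track and RPM stays (roughly) fixed. The intermediate speeds in the sketch below are derived from that constraint using the endpoint numbers quoted above - they are not taken from Apple documentation, so treat them as approximations.

          # Zoned CAV on the 400K Mac format, reconstructed from the endpoints quoted
          # above (8-12 sectors per track, 394-590 RPM). Intermediate speeds are derived
          # from the constant-data-rate constraint, not quoted from Apple docs.

          DATA_RATE = 12 * 394                 # outermost zone: 12 sectors at 394 RPM

          for sectors in range(12, 7, -1):     # 12, 11, 10, 9, 8 sectors per track
              rpm = DATA_RATE / sectors
              print(f"{sectors:2d} sectors/track -> ~{rpm:.0f} RPM")

          # 8 sectors/track works out to ~591 RPM, matching the quoted 590 RPM
          # inner-zone speed: the motor is simply slowed on the long outer tracks and
          # sped up on the short inner ones so the bit rate at the head never changes.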



      And then there is one really widespread use of the basic idea, even predating the Twiggy: the Commodore drives (*2), starting with the PET's 2031 but most notably the 1541. All of these 170 KiB drives wrote different zones of 17 to 21 sectors per track; but instead of running the disk at different speeds, the data rate was varied, to the same effect. This was possible because the controller was not only specified to work over the extended range, but was also what today might be called a software-defined FDC, as all data handling was done in software by a separate 6502.
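
      For concreteness, the commonly documented 1541 zone layout is sketched below. The track ranges and approximate bit rates are the widely quoted reference values (see also the ROM disassembly linked in the comments), not something extracted from a drive here, so treat them as illustrative.

          # Commonly documented 1541 speed zones: the spindle stays at ~300 RPM and the
          # bit rate is switched per zone instead. Figures are the usual reference
          # values, included for illustration only.

          ZONES = [
              # (first track, last track, sectors per track, approx. bit rate in bits/s)
              ( 1, 17, 21, 307_692),   # outermost zone, densest
              (18, 24, 19, 285_714),
              (25, 30, 18, 266_667),
              (31, 35, 17, 250_000),   # innermost zone
          ]

          total = sum((last - first + 1) * spt for first, last, spt, _ in ZONES)
          print(total, "sectors x 256 bytes =", total * 256, "bytes")   # 683 sectors, ~170 KB

          # A flat 17-sectors-everywhere (pure CAV) layout on the same 35 tracks:
          print(35 * 17 * 256, "bytes")   # ~152 KB, i.e. the zoning buys roughly 15% extra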



      Other companies/developers toyed with the same idea as Apple or Commodore; the Sirius (*3), for example, did so as well, increasing the standard 500 KiB (unformatted) capacity per side to 600 KiB - and instead of asking buyers to get 'special' media to make it happen, standard diskettes could be used. Then again, is it not Chuck Peddle's brainchild - the same man who was behind the PET development? :))




      Why not? They would certainly have gained great benefit from more capacity.




      The benefit of some 10-15% increased capacity (*4) came with several drawbacks:





      • Track switch time (as mentioned) increases whenever a zone border is crossed and the motor speed needs to be changed. While not as bad as spinning up from a standstill, it may take several revolutions to stabilize. After all, there are real motors and real masses to accelerate or decelerate here.




        • This was offset a bit by the introduction of direct drive and smaller disk sizes, but not by much.

        • To counter the cost, not every track was handled differently; instead, zones were used (*5).



      • Increased cost for motor control on the drive side


      • Increased cost for motor control on the controller side

      • Introduction of a non-standard interface, as the existing interface had no motor control lines besides on and off


      Especially the latter was an even greater turn-off for manufacturers than the increased cost. Offering more capacity is only a minor advertising feat, and increased cost might be justified by it (*6), but a non-standard interface means a hard migration into uncharted territory - even more so when it ties you to that non-standard (drive) manufacturer. Nothing CEOs like.



      The 3.5 inch drive itself is a major example here, as it only took off after stripping away everything 'new' except the size and making it compatible with the existing 5.25 inch drives. Even Apple at one point dropped their scheme and changed to standard drives - much like Commodore did (twice) when offering the 1570/71 drives, capable of reading and writing standard-compatible 5.25 inch disks.



      The alternative of modifying the data rate instead would have avoided some of these drawbacks, but required new controller chips and a different analogue setup - and is also more delicate, as data rate and head gap are related.




      Intuitively I would rather the drive spend an extra tenth of a second seeking, than require me to spend an extra ten seconds manually swapping disks because one disk didn't have enough capacity.




      Sure thing - except that, with a gain of only ~15% in capacity (*4 again), the saving isn't much and you'll be swapping again soon. With diskette sizes it's much like with CPU speed: anything less than a doubling is hard to notice.





      *1 - Keep in mind, this was before the standardisation of 3.5" drives.



      *2 - Thanks to Felix Palmen for reminding.



      *3 - Wilson dug out the corresponding patent.



      *4 - Just pull out your geometry book and calculate the difference in circumference between a circle of 1.354 inches diameter and one of 2.25 inches (for 5.25 inch drives), or 0.9719 and 1.5551 inches (for 3.5 inch drives) - or be lazy and just divide the diameters to get the factor by which track length increases (worked through in the short sketch below).



      Also keep in mind that only whole blocks count, so a new block can only be added to a track once its length has grown by at least ~1/16th (for 5.25 inch) or ~1/8th (for 3.5 inch).
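
      Being lazy, as suggested (this sketch just does the division from the footnote; the diameters are the ones given above, everything else is arithmetic):

          # Footnote *4 worked out: track length scales with diameter, so dividing the
          # outer by the inner diameter gives the relative length increase directly.

          for name, d_inner, d_outer in [('5.25 inch', 1.354, 2.25),
                                         ('3.5 inch', 0.9719, 1.5551)]:
              ratio = d_outer / d_inner
              print(f"{name}: outermost track is {ratio:.2f}x the innermost "
                    f"(up to {ratio - 1:.0%} more room on that one track)")

          # Roughly 1.66x for 5.25 inch and 1.60x for 3.5 inch. Averaged over the whole
          # disk, and rounded down to whole sectors per track as the footnote says,
          # the practical formatted gain shrinks to the 10-15% range quoted above.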



      *5 - Which also makes sense for blocked structures with a fixed block length.



      *6 - And was soon offset by new, cheap(er) integrated solutions.



























      • 3





        @rwallace As you try to go closer to the center, you have a shorter track length and so fewer sectors per track, and you need faster motor speeds to keep the same data transfer rate. The faster speeds will put more stress on the whole disk area through friction between the disk and the casing, and cause more "wobble" through out-of-balance disks. CDs can use a larger range since they are rigid and have no mechanical contact with the disk for reading and writing.

        – alephzero
        Jan 23 at 14:08






      • 2





        @rwallace Keep in mind that a head and a head mount still need space inside the slot. Leaving off 1.5 cm seems reasonable.

        – Raffzahn
        Jan 23 at 14:23






      • 2





        You forgot the physical size of the read/write head, which was huge compared with a hard disk head because it has to physically hold the disk material flat. See kids.kiddle.co/Floppy_disk (near the bottom) for pictures.

        – alephzero
        Jan 23 at 14:33






      • 1





        Here's a fun story about the Twiggy drives used in the early Apple machines, and the sometimes hilarious politics within the company... folklore.org/StoryView.py?story=Hide_Under_This_Desk.txt

        – KlaymenDK
        Jan 23 at 14:56






      • 2





        Re: the Commodore drives, you're correct. See e.g. this C1541 memory map: sta.c64.org/cbm1541mem.html — check out address 1C00 and bits 5-6 which set data density. You can also check out that drive's ROM disassembly at ffd2.com/fridge/docs/1541dis.html looking at the routine from F33C, especially from "get number of sectors per track" at F348 down to actually setting the density at F35C. Density is entirely programmatic, so a fast loader or other similar thing could try to over-juice some of the shorter tracks if it wanted.

        – Tommy
        Jan 23 at 16:24


















      30















      But as the linked article indicates, while this is used on optical disks, it has generally not been used on floppy disks.




      It has. Apple's famous Twiggy drive was one attempt to do so. It featured 6 different zones with 15 to 21 sectors per track. As a result some 120 additional sectors per side (or 100KiB per disk) could be used. To keep the data rate constant rotation varied between 394 (outer tracks) and 590 (inner tracks) RPM.



      With the switch to 3.5 inch drives on the Lisa and later Mac this idea was the reason for Apple to develop their own format (*1) using a Sony drive. In fact, controller and parameters kept mostly the same (394-590 RPM), except now it was 8-12 sectors due the shorter track length of a 3.5 inch drive. Set off by using 80 instead of 46 tracks, so a single-sided 3.5 inch drive did hold 400 Kib, while a dual sided Twiggy had ~850 KiB.



      And then there is one really widespread use of the basic idea, even predating the Twiggy: The Commodore drives (*2) starting with the PETs 2031, but most notably the 1541, all 170 KiB drives used to write different zones of 17 to 21 sectors per track. But instead of running the disk at different speeds the data rate was varied to the same result. This was possible as the controller was not only specified to work the extended range, but also was what today might be called a Software-Defined-FDC, as all data handling was done by a separate 6502 in software.



      Other companies/developers toyed with the same idea as Apple or Commodore, for example the Sirius(*3) did so as well, increasing standard 500 KiB (unformatted) capacity per side to 600 KiB. And instead of asking to buy 'special' media, to make it happen, standard diskettes could be used. Then again it's not Chuck Peddle's brain child - the same man who was behind the PET development :))




      Why not? They would certainly have gained great benefit from more capacity.




      The benefit of some 10-15% increased capacity (*4) got several drawbacks:





      • Track switch speed (as mentioned) will increase whenever a zone border is crossed and the motor speed needs to be changed. While not as much as spinning up from stand, it may take several turns to stabilize. After all, we got real motors and real masses to accelerate or decelerate here.




        • This got offset a with the introduction of direct drive and smaller disk sizes, but not much.

        • To counter the cost, not every track was handled different, but zones where used (*5).



      • Increased cost for motor control on the drive side


      • Increased cost for motor control on the controller side

      • Introduction of a non standard interface, as the existing interface had no motor control lines beside on and off


      Especially the latter was an even greater turn-off than increased cost for manufacturers. Offering more size is only a minor advertisement feat, increased cost may be handed to that (*6), but non-standard interface means hard migration into uncharted territory - even more so being tied to this non-standard manufacturer (of the drives). Nothing CEOs like.



      The 3.5 inch drive itself is a major example here, as it only took off after stripping everything 'new' but the size and making it compatible with the existing 5.25 drives. Even Apple dropped at one point their scheme and changed for standard drives - much like Commodore did twice when offering the 1570/71 drives capable of reading and writing standard compatible 5.25 disks.



      The alternative to modify the data rate would have offset some of the drawbacks, but required new controller chips and different analogue setup - also more delicate, as data rate and head gap are related.




      Intuitively I would rather the drive spend an extra tenth of a second seeking, than require me to spend an extra ten seconds manually swapping disks because one disk didn't have enough capacity.




      Sure thing - except, with a gain of only ~15% in size (*4 again), the saving isn't much and you'll swap soon again. With diskette sizes it's much like with CPU speed. Every thing less than doubling is hard to notice.





      *1 - Keep in mind, this was before the standardisation of 3.5" drives.



      *2 - Thanks to Felix Palmen for reminding.



      *3 - Wilson digged out the corresponding patent.



      *4 - Just pull out your geometry books and calculate the difference in diameter of a circle with 1.354 inches and 2.25 inches (for 5.25 inch drives) or 0.9719 inches to 1.5551 inches (for 3.5 inch drives) - or be lazy and just divide them to get a factor telling the relative length increase.



      Also keep in mind that only whole blocks will work, so only if it's increased by at least ~1/16th for 5.25 or 1/8th for 3.4 a new block can be added to a track.



      *5 - Which as well makes sense for blocked structures with fixed block length.



      *6 - And soon offset new cheap(er) integrated solutions.






      share|improve this answer





















      • 3





        @rwallace As you try to go closer to the center, you have a shorter track length so fewer sectors per track, and you need faster motor speeds to keep the same data transfer rate. The faster speeds will put more stress on the whole disk area through friction between the disk and the casing and cause more "wobble" through out of balance disks. CDs can use a larger range since they are rigid and have no mechanical contact with the disk for reading and writing,

        – alephzero
        Jan 23 at 14:08






      • 2





        @rwallace Keep in mind that a head and a head mount still needs space inside the slot. Leaving off 1.5 cm seams reasonable.

        – Raffzahn
        Jan 23 at 14:23






      • 2





        You forgot the physical size of the read/write head, which was huge compared with a hard disk head because it has to physically hold the disk material flat. See kids.kiddle.co/Floppy_disk (near the bottom) for pictures.

        – alephzero
        Jan 23 at 14:33






      • 1





        Here's a fun story about the Twiggy drives used in the early Apple machines, and the sometimes hilarious politics within the company... folklore.org/StoryView.py?story=Hide_Under_This_Desk.txt

        – KlaymenDK
        Jan 23 at 14:56






      • 2





        Re: the Commodore drives, you're correct. See e.g. this C1541 memory map: sta.c64.org/cbm1541mem.html — check out address 1C00 and bits 5-6 which set data density. You can also check out that drive's ROM disassembly at ffd2.com/fridge/docs/1541dis.html looking at the routine from F33C, especially from "get number of sectors per track" at F348 down to actually setting the density at F35C. Density is entirely programmatic, so a fast loader or other similar thing could try to over-juice some of the shorter tracks if it wanted.

        – Tommy
        Jan 23 at 16:24
















      6














      During the heyday of floppy drives, the technology needed for CLV was rather expensive. Writing/reading data at a fixed rate and running a motor at a fixed speed is the cheapest option. Variable-speed oscillators were uncommon and not generally available as single integrated circuits.



      At the time, cost tended to be the most important factor: computers were still very expensive, and for the many people still using tapes for storage, a floppy drive was already relatively fast and spacious.



      The gains were also somewhat marginal. Consider Apple's expensive "Twiggy" drives: they proved unreliable and could only store 871k of data on a 5.25" disk. Sony had already released its 3.5" format two years earlier, and with double-density disks computers were able to store 880k on them. They quickly became popular and cheap.



      I'm not convinced by Wikipedia's claim that speed changes would have had a big influence on seek times. Floppy disks have a lot more friction and run at much lower speeds than optical media, where these speed changes do reduce seek performance.






      answered Jan 23 at 15:28 by user, edited Jan 23 at 15:33





















      • 4





        Spinning up and down to speed and waiting for the drive to settle did take longer than running the drive motor at a fixed speed. Maybe it's the age of my drive but on my Apple IIgs there's a marked delay between the drive speed changing and the track stepper moving.

        – scruss
        Jan 23 at 15:46






      • 1





        Sure, but not as much as with a DVD or similar optical disc. The speed change is slow enough that you can hear it ramping up. Floppy disks rotate at a much lower speed and go from zero to operating speed in a few hundred milliseconds at most.

        – user
        Jan 23 at 17:02






      • 1





        Sony's 400k and 800k floppy drives for Apple Macintosh were zoned (had multiple speeds).

        – Whit3rd
        Jan 24 at 8:07






      • 1





        I counter-contest the idea that the spindle speed change is a major factor in optical drive seek times. Head seeking is quite slow, sometimes slower than that of a floppy drive, and it may take a few revolutions to actually sync to the right radial position by comparing the desired sector number to what's passing under the head and fine-tuning back and forth. In contrast, high speed modes (>24x) are all CAV i.e. CONSTANT motor speed, slower CLV ones don't change by more than 2.5x from inner (high rpm) to outermost (low), and data transfer proper is clocked by the hard-formatted groove wobble...

        – tahrey
        Jan 25 at 0:19






      • 1





        (Thus your optical drive doesn't actually need to spin down all the way before it can start reading from its new further-out head position, and doesn't need to spin up at all for a new further-in one; so long as the resulting raw data rate is within the limits of what the decoder hardware can interpret and buffer for output, it can start reading immediately the correct starting sector is identified, regardless of the spindle speed, and continue doing so as the motor adjusts to the set target speed. And, of course, most of the rpm change can be managed during the head seek anyway...)

        – tahrey
        Jan 25 at 0:22
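
      To put rough numbers on tahrey's point about the CLV speed range, here is a small sketch; the ~1.3 m/s 1x scanning velocity and the ~25 mm to ~58 mm data-area radii are assumed ballpark CD values, not taken from the comment.

          from math import pi

          v = 1.3                            # m/s, approximate 1x CD scanning velocity (assumed)
          r_inner, r_outer = 0.025, 0.058    # metres, approximate data-area radii (assumed)

          def rpm(radius):
              # Spindle speed needed to hold linear velocity v at a given radius.
              return 60 * v / (2 * pi * radius)

          print(rpm(r_inner))                 # ~497 rpm at the innermost data
          print(rpm(r_outer))                 # ~214 rpm at the outer edge
          print(rpm(r_inner) / rpm(r_outer))  # ~2.3x overall range, within the "no more than 2.5x" quoted above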
















      5














      At least one computer does (something close to) what you are describing, and there is a relevant patent.



      Wikipedia claims:




      But disks made at constant bit density were not compatible with machines with standard drives.




      And this is apparently supported by a dead-tree citation. I read the sentence as meaning that the drives need special disks. So my guess is that because these drives never caught on, the disks didn't -- it was a chicken-and-egg problem that caused this implementation of the technique not to gain much market share or traction in the industry.



      And perhaps because of the patent, no-one else tried to make a CLV floppy drive as far as I can tell.






      answered Jan 23 at 12:09 by Wilson, edited Jan 23 at 12:17





















      • 1





        Good find, thanks! I would expect the sentence to mean you can use the same blank disks, but disks written on one kind of drive cannot be read on the other kind.

        – rwallace
        Jan 23 at 12:21






      • 1





        AFAIR Apple's Twiggy drive did so before.

        – Raffzahn
        Jan 23 at 13:45






      • 1





        @Raffzahn before the Sirius 9000? So why did they award the patent?

        – Wilson
        Jan 23 at 13:57






      • 3





        @Wilson That's something you may want to ask the patent office :)) According to Andy Hertzfeld's story the Twiggy drives were developed in 1981. The Lisa (with Twiggys) went on sale in January 1983. The patent was filed in October 1982. It's safe to assume the Twiggys weren't developed in three months from idea to delivery, so I guess it's either invalid by prior art, or its claims are more subtle on certain parts of the logic - something different from the way Apple (or Commodore) did it.

        – Raffzahn
        Jan 23 at 14:09






      • 2





        According to this (p. 1-2, § 1.3), "the Victor 9000 uses 5 1/4-inch minifloppies of a similar type to those used in other computers." It goes on to imply that the disks are physically compatible but states that they are not interchangeable with other systems due to the wildly different formatting (variable rotation speed).

        – Alex Hajnal
        Jan 23 at 23:06


















      3














      The 800 KB second-generation Macintosh 3.5 inch floppy had CV (variable-speed) zones (Wikipedia: Floppy disk). When the next-gen floppy (1.44 MB) came out, it didn't use the CV technology but was supported on many different OSes, relegating the CV version to the back of the bookshelf.



      You could hear the speed change if you listened closely.






      answered Jan 24 at 1:09 by Flydog57, edited Jan 24 at 4:59 by Dranon





















      • 2





        As did the Apple IIgs 800 K drive

        – scruss
        Jan 24 at 1:21






      • 1





        Also the single-sided 400k Mac floppy, 1984 through 1986.

        – Whit3rd
        Jan 24 at 8:10
















      3














      Something nobody else has mentioned is that the nature of the data stored on optical media and on magnetic media was very different at the time.



      During the floppy era, optical disks were mostly write-once media that contained very long, sequential files - mostly music (CDs) or video (LaserDisc). Data disks did start becoming popular towards the end of the floppy era, but they were still write-once and often structured as large packed data files.



      Write-once is important, since it implies there is no disk fragmentation. Fragmentation would be a serious issue for CLV drives, since it could require multiple significant changes to rotation speed while reading even a small file. Large sequential files, as were common on optical media, meant seeks were rare (and consequently significant rotation-speed changes were rare too).
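
      As a rough sketch of that effect, the following uses made-up but plausible timings (all of them assumptions, none from this answer) to compare a small fragmented file whose pieces stay within one speed zone against one whose pieces alternate between two zones:

          # Hypothetical timings for a zoned/CLV floppy (illustrative only).
          fragments       = 8      # pieces the file is scattered into
          read_per_frag_s = 0.05   # reading one fragment once the head is on track
          seek_s          = 0.03   # track-to-track seek and head settle
          respin_s        = 0.40   # extra wait for the spindle to settle at a new zone speed

          within_one_zone  = fragments * (read_per_frag_s + seek_s)
          across_two_zones = fragments * (read_per_frag_s + seek_s + respin_s)
          print(within_one_zone, across_two_zones)   # roughly 0.64 s vs 3.84 s for the same file

      With mostly sequential, unfragmented data the respin penalty is paid rarely; with a fragmented file it quickly dominates.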






      answered Jan 24 at 18:33 by patros




























              1














              Another point not yet mentioned is that for constant linear velocity to offer the most benefit, it is necessary to use a spiral track rather than a series of rings. Every separately-writable sector on a disk has a significant amount of overhead, so using larger sectors will improve storage efficiency. If one uses constant-linear-velocity storage with consecutive rings, however, each track will lose an average of half a sector because there's no way to store half a sector on a track. Using a spiral will eliminate any need to have an integer number of sectors per revolution.



I've read that some copy protection schemes for the Apple II wrote information in a spiral. I suspect they did so more to thwart piracy than to enhance storage density, but I also suspect that a disk operating system limited to loading and storing 32-Kbyte chunks could probably fit six spiral-written chunks on what would normally be a 140K floppy.
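
(For scale: six 32-Kbyte chunks would total 192 Kbytes, roughly 37% more than the nominal 140 Kbytes.)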






              share|improve this answer




























                  answered Jan 24 at 17:50









                  supercat

                  7,352740

































