Using DRAM as a camera sensor?











Back in the days when COMECON (RVHP) was cloning digital ICs, the first wave of such ICs was usually in a ceramic package with a glass window (similar to EPROMs) so the die could be inspected for bugs and such while in use. Some chips were very sensitive to light, and the window was usually covered by a sticker (also similar to EPROMs).



This is the first image of a similar package I found in Google Images, taken from Kyocera: Standard Ceramic Packages:



IC package with glass



There were rumors that DRAM chips like the MHB4116 (a 16384×1-bit clone of the MK4116) in such packages could be used as a simple B&W camera (similar to a CCD, as the cells are just parasitic MOS capacitors, and incoming light can charge them to saturation or dissipate them more quickly). Back in those days, having a camera interfaced to a computer was unheard of, due to the lack of interfaces and ICs and the expense. I tried to get my hands on some such chips myself but had no luck (I got just a few Russian CPUs, multiplexers, etc., but no DRAM).



So I am curious whether someone managed to do this, and what parameters the camera had:




  • resolution (I assume fairly limited; given the speed of the control system, the theoretical max is 128×128 pixels)

  • color depth (I expect just B&W or a few shades of gray)

  • fps?

  • what was the interface, and which computer (I assume Z80- or 8080-based)?

  • the control circuitry could also be interesting

  • how was blanking done (a physical shutter, or was writing zeros enough to discharge the cells)?


BTW, just to vent my frustration that I never found a suitable DRAM chip for this back in the day: I managed to build at least a scanner instead ...






























  • I presume RVHP is Rada vzájemné hospodářské pomoci, but I can't read Czech or Slovak. Could you explain what the acronym is referring to? Maybe you mean COMECON in general? – Alex Hajnal, 14 hours ago

  • What's the attribution for that image? – Alex Hajnal, 14 hours ago

  • Yes, that's COMECON in English (Council for Mutual Economic Assistance). – Alex Hajnal, 13 hours ago

  • While every Eastern Bloc nation mostly used its own name, like "Rat für gegenseitige Wirtschaftshilfe" or RGW, it was commonly referred to as Comecon on the outside (BTW, it's not an acronym but a name). Fun part: even within the SU, different names were used, like SEV in Russia, SEU in Belarus, or REV in Ukraine :) – Raffzahn, 9 hours ago

  • @Spektre Not to be too insistent, but we do need to know where you got that image. Please add such information to the answer. Or else it's plagiarism; you know our policy on that. (If not, see the help center.) – wizzwizz4, 5 hours ago















Tags: hardware, graphics, 4116, dram






asked 15 hours ago by Spektre, edited 3 hours ago








3 Answers
That sounds a lot like the Cromemco Cyclops. Released in 1975, it used a modified[1] MOS 1 kbit DRAM[2] to capture a 32×32 black-and-white or greyscale image. The memory cells were initially set to all 1s. As they were exposed to light they would progressively switch to 0s; the more light hitting a cell, the faster the transition[4]. By making multiple read passes, a greyscale image could be built up. The camera was sold with a case, lens, etc., along with controller cards for use in an S-100 bus computer. Given that the system consisted entirely of off-the-shelf parts (with only one minor modification) and included complete source code, it would have been trivial to clone, both in the Eastern Bloc and elsewhere.



[1] Modified meaning replacing the opaque die cover with a transparent one.

[2] The same technique would probably also work fine with higher-density non-buffered[3] DRAMs.

[3] Thanks to Raffzahn for pointing that out.

[4] This results in a negative image when it is read out: 0s in the bright areas, 1s in the dark portions.
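The readout scheme described above can be sketched as a toy simulation. This is purely illustrative (the scene data, decay model, and pass count are assumptions), not code for the actual hardware:

```python
import random

SIZE = 32     # the Cyclops sensor was a 32x32 cell array
PASSES = 15   # multiple read passes build up the greyscale value

def capture(scene):
    """Simulate multi-pass readout of a light-sensitive DRAM.

    `scene[y][x]` is a light level in [0, 1]; brighter cells are more
    likely to decay from 1 to 0 on each pass. Counting how many passes
    a cell survives gives a greyscale value (dark = high count), which
    also shows why the raw readout is a negative image.
    """
    cells = [[1] * SIZE for _ in range(SIZE)]   # "write all 1s"
    image = [[0] * SIZE for _ in range(SIZE)]
    for _ in range(PASSES):
        for y in range(SIZE):
            for x in range(SIZE):
                if cells[y][x] and random.random() < scene[y][x]:
                    cells[y][x] = 0             # photons drained this cell
                image[y][x] += cells[y][x]      # accumulate survival count
    return image  # 0 = brightest, PASSES = darkest

# A hypothetical gradient scene: light rises from left (dark) to right.
scene = [[x / SIZE for x in range(SIZE)] for _ in range(SIZE)]
img = capture(scene)
```

Cells that see no light at all survive every pass and read back as the maximum value, while brightly lit cells drop out in the first pass or two; that spread over the passes is the greyscale.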



The image sensor chip:



Cromemco Cyclops sensor chip
Source: Wikimedia Commons (Public domain)



Reading through the camera manual, it seems the camera itself consisted of a case, a lens, and three circuit boards. The front board held the image sensor, a sequential address generator for reading out the values, and two bias LEDs used to improve sensitivity in low-light situations. The second board contained support circuitry, and the third board contained the power supply and IO transceiver. Communication with the camera was over a pair of differential lines (one input pair and one output pair).



There is no mention of frame rate in the camera manual; however, the interface manual (see below) mentions a clock signal (1 µs per pixel) and initialization time (5 µs for regular capture, 17 µs for capture with the bias LEDs active); it took as long to reset the memory cells as it did to read a single monochrome frame. Ignoring the setup time, the capture time for a single monochrome frame is 1024 µs, or ~976 frames per second. For full-bit-depth greyscale images the sensor would be read 15 times in 15.36 ms, giving a maximum frame rate of ~65 frames per second (16.39 ms, or ~61 frames per second, including initialization). The interface supported four exposure settings which modified the capture rate[5]; these resulted in greyscale frame rates of ~61, ~22.5, ~14, and ~10 frames per second. 15 reads per greyscale frame means the final, processed images were probably 4 bits per pixel (2^4 = 16). I'd have to read the camera and controller schematics and driver code more closely to be sure about any of the above.
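The arithmetic behind those frame-rate figures is simple enough to check. The timings are taken from the manual figures quoted above; this is a back-of-the-envelope estimate, not a measurement:

```python
PIXEL_US = 1                    # 1 µs per pixel, per the interface manual
PASS_US = 32 * 32 * PIXEL_US    # one full read pass over 1024 cells: 1024 µs

mono_fps = 1e6 / PASS_US        # single-pass monochrome: ~976 fps
grey_us = 15 * PASS_US          # 15 read passes: 15 360 µs, i.e. ~65 fps
# Add one reset pass (same length as a read) plus the 5 µs setup time:
full_us = grey_us + PASS_US + 5 # 16 389 µs, i.e. ~61 greyscale fps
```

The assumption here is that the reset counts as one extra pass; the exposure-delay settings would stretch each pass further and give the lower rates quoted.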



The computer interface used a pair of cards that plugged into an i8080-based S-100 bus system. These cards consisted almost entirely of 74-series ICs. Each card set could control up to 16 cameras. DMA was used to transfer images to the controlling system's RAM and an interrupt could be generated for each captured frame. Use of this card set was optional; the camera manual (mentioned above) describes the interface in detail and gives an example of displaying the image directly on an oscilloscope. The sample code provided is for an i8080-based system but I see no reason why the card set couldn't be adapted to S-100 systems using different CPUs.



Both of the above-linked documents include complete schematics, parts lists, and IO protocol descriptions.





[5] By adding a delay of 0, 2, 4, or 6 ms after each complete read of the memory (i.e. after every 1024 bit reads).






  • From the images it looks like the blanking was done electrically (no physical shutter). I was not sure if the charge was dissipated or added by the light, and I'm still not sure, as later chips might use different gates, but dissipation makes more sense. – Spektre, 13 hours ago

  • I'm not totally clear on that, but it seems like there was no physical shutter. The frame rate would depend on how brightly lit the sensor was and the bit depth one wanted to achieve. Starting a new capture would be done by programming all 1s, a kind of electronic 'shutter'. WRT the physics, I'm not sure either, but I too suspect charge dissipation. – Alex Hajnal, 13 hours ago

  • You can see an interesting project using the Cyclops in the early 80s at youtu.be/2y5oVHNfbf8 . You can't see the camera particularly well, but it's used to track a ball bearing rolling around a 2D maze. Since the resolution is so low, the camera had to be physically panned in two axes to keep the ball in its field of view. – Joe Lee-Moyet, 9 hours ago

  • Comments are not for extended discussion; this conversation has been moved to chat. – wizzwizz4, 5 hours ago

  • Thanks for talking about the capture speed. I saw that it could be used for video, so I assumed it was working at speeds of at least 15 fps, but didn't dig deep enough to get the details. Very interesting! – JPhi1618, 3 hours ago


















Alex Hajnal's answer pretty well describes what I believe is the first, and in the end only, commercially available camera that directly used RAM chips: the Cyclops (*1). It started out as a hobby-level project, around the same time chip manufacturers were building the first dedicated CCD camera elements. CCDs were the super hype of the 70s, at least to electronics freaks. For chip manufacturers it wasn't a big deal to add secondary circuitry (like counters and a DAC) directly on chip, and it makes a lot of sense to lay out the die to support that purpose, doesn't it? DRAMs are not laid out that way; their layout is meant to simplify the structure and speed up access.



The CCD effect was discovered in 1969, independently of DRAM development, and grew out of implementing 1960s bucket-brigade delay lines in silicon. The later DRAM development was based on the same idea of using a capacitor to hold a charge. Since silicon is sensitive to photons, its use in detectors is quite obvious.



An important point is that the whole setup only works if the analogue structure of the storage cell (the capacitor) is directly available at the output pin, not hidden behind digital line drivers. This is only true for some very early DRAM circuits; later ones (including the 4116) use buffering drivers. Also, their organization is no longer a simple square matrix, as with 1 KiBit DRAMs, but at least two separate blocks with sense amplifiers and decoders in between, which additionally makes them unusable for camera purposes (*2).



Mostek's MK4096 4 KiBit RAM is about the last generation with only a single RAM-cell array, organized as 64×64 bits (*3).



Long story short, you're out of luck making it happen with a 4116, at least not the same way as the Cyclops. Pure B&W may still work with a lot of fine tuning.





*1 - Here is a nice timeline and description of the basic workings in non-electronicsese.



*2 - A picture where only the upper and lower 40% are captured isn't very useful either, and using only one side would result in only about 25% of all cells used (*4), thus making a 16 KiBit RAM-CCD no more useful than a 1 KiBit one.



This is, BTW, also the reason why chip designers used a physical structure of two blocks of 128×64 cells each: that way, the wiring needed to reach each cell of a 16 KiBit RAM wasn't any more complex or space-consuming than for a 4 KiBit.



*3 - The 64x64 array isn't as square as it seems, but almost 16:9 ... did they plan ahead for HDTV?



*4 - 31% with a 4:3 picture format. Then again, with some lens tricks the entire half might be used - though 128x64 is a weird resolution, isn't it?






  • The *1 link is particularly interesting, as it goes into the theory of operation and covers some of the low-level implementation details. – Alex Hajnal, 7 hours ago































We tried it in the lab, circa 1984.



I worked with a hardware team and somewhere they'd read an article, the gist of which was something like:




  • write all 1s to the DRAM

  • ensure you don't have any hardware dynamic RAM refresh going on

  • expose it for a given period

  • read the decayed bits back


I believe that we ended up having to write 1s or 0s depending on the bit position, as some of the RAM bits were inverted.
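Those steps might look roughly like this as a driver sketch. Everything here is hypothetical: `dram` stands in for the memory-mapped chip (a plain list in this demo), and the bit-inversion mask is a guess at how the inverted bit positions were handled:

```python
import time

def expose(dram, exposure_s, inverted_bits=()):
    """Capture one frame using a raw DRAM as the sensor (sketch only).

    `dram` is any word-addressable sequence. Some chips store certain
    bit positions inverted, so the "all ones" charge pattern must flip
    those bits -- the write-1s-or-0s-depending-on-position step.
    """
    pattern = 0xFF
    for bit in inverted_bits:
        pattern ^= 1 << bit           # these cells charge as physical 0s
    for addr in range(len(dram)):     # charge every cell
        dram[addr] = pattern
    # Refresh must stay disabled during this wait, or the controller
    # re-charges the cells and erases the image.
    time.sleep(exposure_s)            # let light decay the cells
    # XOR against the initial pattern: a 1 bit marks a decayed cell.
    return [dram[addr] ^ pattern for addr in range(len(dram))]
```

With a plain list nothing ever decays, so the returned frame is all zeros; on the real board, the read-back words differ from the written pattern wherever light hit the die.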



Sadly we never tried it with a lens, but I definitely remember we showed it was light-sensitive, and fiddly.



We did it on a single-board computer our company designed: a 160 × 100 mm board with a 6809 CPU and 64 KB of DRAM. As a cost-engineering measure, there was no dynamic-RAM refresh circuitry. (Instead we used a non-maskable interrupt to run through enough addresses to keep the DRAM refreshed; from memory it was something like 64 or 128.) We would have written a special test program in EEPROM, with the NMI switched off.



I believe we did it with US-made milspec chips: I certainly remember we had very few chips in ceramic packaging other than EEPROM and the occasional CPU.






– jonathanjo (new contributor)
    Your Answer








    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "648"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














     

    draft saved


    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f8328%2fusing-dram-as-a-camera-sensor%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    3 Answers
    3






    active

    oldest

    votes








    3 Answers
    3






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    23
    down vote













    That sounds a lot like the Cromemco Cyclops. Released in 1975, it used a modified1 MOS 1kbit DRAM2 to capture a 32×32 black and white or greyscale image. The memory cells were initially set to all 1s. As they were exposed to light they would progressively switch to 0s; the more light hitting a cell, the faster the transition4. By making multiple read passes, a greyscale image could be read. The camera was sold with a case, lens, etc. along with controller cards for use in an S-100 bus computer. Given that the system was comprised entirely of off-the-shelf parts (with only one minor modification) and included complete source code it would have been trivial to clone both in the Eastern Bloc and elsewhere.



    1 Modified meaning replacing the opaque die cover with an transparent one.



    2 The same technique would probably also work fine with higher density non-buffered3 DRAMs.



    3 Thanks to Raffzahn for pointing that out.



    4 This results in a negative image when it is read out: 0s in the bright areas, 1s in the dark portions.



    The image sensor chip:



    Cromemco Cyclops sensor chip
    Source: Wikimedia Commons (Public domain)



    Reading through the camera manual it seems the camera itself comprised of a case, lens, and 3 circuit boards. The front board had the image sensor, a sequential address generator for reading out the values, and two bias LEDs used to improve sensitivity in low-light situations. The second board contained support cicuitry, and the third board contained the power supply and IO transceiver. Communication with the camera was over a pair of differential lines (one input pair and one output pair).



    There is no mention of frame rate in the camera manual however in the interface manual (see below) there is a mention of a clock signal (1µs per pixel) and initialization time (5µs for regular capture, 17µs for capture with the bias LEDs active); it took as long to reset the memory cells as it did to read a single monochrome frame. Ignoring the setup time, the capture time for a single monochrome frame is 1024µs or ~976 frames per second. For full bit-depth greyscale images the sensor would be read 15 times in 15.36ms resulting in a maximum frame rate of ~65 frames per second (16.39ms or ~61 frames per second including initialization). The interface supported four exposure settings which modified the capture rate5; these resulted in greyscale frame rates of ~61, ~22.5, ~14, and ~10 frames per second. 15 reads per greyscale frame means the final, processed images were probably 4 bits per pixel (24 = 16). I'd have to read the camera and controller schematics and driver code more closely to be sure about any of the above.



    The computer interface used a pair of cards that plugged into an i8080-based S-100 bus system. These cards consisted almost entirely of 74-series ICs. Each card set could control up to 16 cameras. DMA was used to transfer images to the controlling system's RAM and an interrupt could be generated for each captured frame. Use of this card set was optional; the camera manual (mentioned above) describes the interface in detail and gives an example of displaying the image directly on an oscilloscope. The sample code provided is for an i8080-based system but I see no reason why the card set couldn't be adapted to S-100 systems using different CPUs.



    Both of the above-linked documents include complete schematics, parts lists, and IO protocol descriptions.





    5 By adding a delay of 0, 2, 4, or 6 ms between each complete read of the memory (i.e. every 1024 bit reads).






    share|improve this answer























    • from the images looks like the blanking was done electrically (no physical shutter). I was not sure if the charge was dissipated or added by the light and still not sure as the latter chips might use a different gates but dissipation makes more sense.
      – Spektre
      13 hours ago








    • 3




      I'm not totally clear on that but it seems like there was no physical shutter. The frame rate would depend on how brightly lit the sensor was and the bit-depth one wanted to achieve. Starting a new capture would be done by programming all 1s, kind of an electronic 'shutter'. WRT the physics, I'm not sure either but I too suspect charge dissipation.
      – Alex Hajnal
      13 hours ago








    • 1




      You can see an interesting project using the Cyclops in the early 80s at youtu.be/2y5oVHNfbf8 . You can't see the camera particularly well, but it's used to track a ball bearing rolling around a 2D maze. Since the resolution is so low the camera had to be physically panned in two axes to keep the ball in its field of view.
      – Joe Lee-Moyet
      9 hours ago










    • Comments are not for extended discussion; this conversation has been moved to chat.
      – wizzwizz4
      5 hours ago










    • Thanks for talking about the capture speed. I saw that it could be used for video, so I assumed it was working at speeds of at least 15fps, but didn't dig deep enough to get the details. Very interesting!
      – JPhi1618
      3 hours ago















    up vote
    23
    down vote













    That sounds a lot like the Cromemco Cyclops. Released in 1975, it used a modified1 MOS 1kbit DRAM2 to capture a 32×32 black and white or greyscale image. The memory cells were initially set to all 1s. As they were exposed to light they would progressively switch to 0s; the more light hitting a cell, the faster the transition4. By making multiple read passes, a greyscale image could be read. The camera was sold with a case, lens, etc. along with controller cards for use in an S-100 bus computer. Given that the system was comprised entirely of off-the-shelf parts (with only one minor modification) and included complete source code it would have been trivial to clone both in the Eastern Bloc and elsewhere.



    1 Modified meaning replacing the opaque die cover with an transparent one.



    2 The same technique would probably also work fine with higher density non-buffered3 DRAMs.



    3 Thanks to Raffzahn for pointing that out.



    4 This results in a negative image when it is read out: 0s in the bright areas, 1s in the dark portions.



    The image sensor chip:



    Cromemco Cyclops sensor chip
    Source: Wikimedia Commons (Public domain)



    Reading through the camera manual it seems the camera itself comprised of a case, lens, and 3 circuit boards. The front board had the image sensor, a sequential address generator for reading out the values, and two bias LEDs used to improve sensitivity in low-light situations. The second board contained support cicuitry, and the third board contained the power supply and IO transceiver. Communication with the camera was over a pair of differential lines (one input pair and one output pair).



    There is no mention of frame rate in the camera manual however in the interface manual (see below) there is a mention of a clock signal (1µs per pixel) and initialization time (5µs for regular capture, 17µs for capture with the bias LEDs active); it took as long to reset the memory cells as it did to read a single monochrome frame. Ignoring the setup time, the capture time for a single monochrome frame is 1024µs or ~976 frames per second. For full bit-depth greyscale images the sensor would be read 15 times in 15.36ms resulting in a maximum frame rate of ~65 frames per second (16.39ms or ~61 frames per second including initialization). The interface supported four exposure settings which modified the capture rate5; these resulted in greyscale frame rates of ~61, ~22.5, ~14, and ~10 frames per second. 15 reads per greyscale frame means the final, processed images were probably 4 bits per pixel (24 = 16). I'd have to read the camera and controller schematics and driver code more closely to be sure about any of the above.



    The computer interface used a pair of cards that plugged into an i8080-based S-100 bus system. These cards consisted almost entirely of 74-series ICs. Each card set could control up to 16 cameras. DMA was used to transfer images to the controlling system's RAM and an interrupt could be generated for each captured frame. Use of this card set was optional; the camera manual (mentioned above) describes the interface in detail and gives an example of displaying the image directly on an oscilloscope. The sample code provided is for an i8080-based system but I see no reason why the card set couldn't be adapted to S-100 systems using different CPUs.



    Both of the above-linked documents include complete schematics, parts lists, and IO protocol descriptions.





    5 By adding a delay of 0, 2, 4, or 6 ms between each complete read of the memory (i.e. after every 1024 bit reads).





























    • from the images it looks like the blanking was done electrically (no physical shutter). I was not sure if the charge was dissipated or added by the light, and I'm still not sure, as later chips might use different gates, but dissipation makes more sense.
      – Spektre
      13 hours ago








    • 3




      I'm not totally clear on that but it seems like there was no physical shutter. The frame rate would depend on how brightly lit the sensor was and the bit-depth one wanted to achieve. Starting a new capture would be done by programming all 1s, kind of an electronic 'shutter'. WRT the physics, I'm not sure either but I too suspect charge dissipation.
      – Alex Hajnal
      13 hours ago








    • 1




      You can see an interesting project using the Cyclops in the early 80s at youtu.be/2y5oVHNfbf8 . You can't see the camera particularly well, but it's used to track a ball bearing rolling around a 2D maze. Since the resolution is so low the camera had to be physically panned in two axes to keep the ball in its field of view.
      – Joe Lee-Moyet
      9 hours ago










    • Comments are not for extended discussion; this conversation has been moved to chat.
      – wizzwizz4
      5 hours ago










    • Thanks for talking about the capture speed. I saw that it could be used for video, so I assumed it was working at speeds of at least 15fps, but didn't dig deep enough to get the details. Very interesting!
      – JPhi1618
      3 hours ago













    edited 26 mins ago
    answered 14 hours ago
    Alex Hajnal

    up vote
    5
    down vote













    Alex Hajnal's answer pretty well describes what I believe is the first and, in the end, only commercially available camera that directly used RAM chips, the Cyclops (*1). It started out as a hobby-level project, at about the same time chip manufacturers were building the first dedicated CCD camera elements. CCDs were like the super hype of the 70s - at least to electronics freaks. For chip manufacturers, it wasn't a big deal to add secondary circuitry (like counters and a DAC) directly on chip, and it makes a lot of sense to lay out the die to support that purpose, doesn't it? DRAMs are not laid out that way, but rather to simplify structure and speed up access.



    The CCD effect was discovered in 1969, independently of DRAM development, and grew out of implementing 1960s bucket-brigade delay lines in silicon. The later DRAM development was based around the same idea of using a capacitor to hold a charge. Since silicon is sensitive to incoming photons, the use as a light detector is quite obvious.



    An important point is that the whole setup only works if the analogue state of the storage cell (capacitor) is directly available at the output pin, not hidden behind digital line drivers. This is only true for some very early DRAM circuits; later ones (including the 4116) use buffering drivers. Also, their organization is no longer a simple square matrix, as with 1 KiBit DRAMs, but at least two separate blocks with sense amplifiers and decoders in between, making them additionally unusable for camera purposes (*2).



    Mostek's MK4096 4 KiBit RAM was about the last generation with only a single RAM cell array, organized as 64x64 bits (*3).



    Long story short, you're out of luck trying to make this happen with a 4116 - at least not the same way as the Cyclops. A pure B&W capture may still work with a lot of fine tuning.





    *1 - Here is a nice timeline and description of the basic workings in non-electronicsese.



    *2 - A picture where only the upper and lower 40% are captured isn't very useful either - and using only one side would result in only about 25% of all cells being used (*4), making a 16 KiBit RAM-CCD no more useful than a 1 KiBit one.



    This is BTW also the reason why chip designers used a physical structure of two blocks of 128x64 cells each - that way the wiring needed to reach each cell in a 16 KiBit RAM was no more complex and space-consuming than for a 4 KiBit one.



    *3 - The 64x64 array isn't as square as it seems, but almost 16:9 ... did they plan ahead for HDTV?



    *4 - 31% with a 4:3 picture format. Then again, with some lens tricks the entire half might be used - though 128x64 is a weird resolution, isn't it?

























    • 1




      The *1 link is particularly interesting as it goes into the theory of operation and covers some of the low-level implementation details.
      – Alex Hajnal
      7 hours ago















    edited 6 hours ago by Alex Hajnal
    answered 9 hours ago
    Raffzahn


















    up vote
    0
    down vote













    We tried it in the lab, circa 1984.



    I worked with a hardware team and somewhere they'd read an article, the gist of which was something like:




    • write all 1s to the DRAM

    • ensure you don't have any hardware dynamic RAM refresh going on

    • expose it for a given period

    • read the decayed bits back


    I believe that we ended up having to write 1s or 0s depending on the bit position as some of the RAM bits were inverted.
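    The procedure above can be sketched as a small simulation. This is my illustration under assumed physics, not the actual board code: cells start fully charged ("all 1s"), each cell discharges at a rate proportional to the light falling on it, and repeated timed read passes record when each cell flips, yielding a few grey levels (a negative, since bright cells flip first):

```python
import random

def capture(light, passes=4, decay=0.05):
    """Simulate a DRAM-as-sensor exposure.

    light  : per-cell illumination values in [0, 1]
    passes : number of timed read passes (more passes = more grey levels)
    decay  : assumed discharge per unit light per pass

    Returns per-cell grey levels: the pass at which each cell had
    decayed below the read threshold, so brighter cells get lower
    values (a negative image).
    """
    charge = [1.0] * len(light)           # "write all 1s to the DRAM"
    grey = [passes] * len(light)          # dark cells never flip
    for p in range(passes):               # no refresh: just let it decay
        for i, lum in enumerate(light):
            charge[i] -= decay * lum * random.uniform(0.9, 1.1)
            if charge[i] < 0.5 and grey[i] == passes:   # read threshold
                grey[i] = p               # "read the decayed bits back"
    return grey

# Bright cells flip early (low value); a fully dark cell stays at `passes`.
image = capture([1.0, 0.5, 0.0], passes=10, decay=0.2)
```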



    Sadly we never tried it with a lens, but I definitely remember we showed it was light-sensitive, and fiddly.



    We did it on a single-board computer our company designed: 160 x 100 mm, 6809 CPU, 64 Kbyte DRAM. As a cost-saving measure there was no dynamic RAM refresh circuitry. (Instead we used a non-maskable interrupt to run through enough addresses to keep the DRAM refreshed; from memory it was something like 64 or 128.) We would have written a special test program in EEPROM, with the NMI switched off.



    I believe we did it with US-made milspec chips: I certainly remember we had very few chips in ceramic packaging other than EEPROM and the occasional CPU.
















    jonathanjo is a new contributor to this site.






















      up vote
      0
      down vote













      We tried it in the lab, circa 1984.



      I worked with a hardware team and somewhere they'd read an article, the gist of which was something like:




      • write all 1s to the DRAM

      • ensure you don't have any hardware dynamic RAM refresh going on

      • expose it for a given period

      • read the decayed bits back


      I believe that we ended up having to write 1s or 0s depending on the bit position as some of the RAM bits were inverted.



      Sadly we never tried it with a lens, but I definitely remember we showed it was light-sensitive, and fiddly.



      We did it on a single board computer our company designed, which was a 160 x 100 mm, 6809 CPU with 64 Kbyte DRAM. Out of cost-engineering, there was no dynamic RAM refresh circuitry. (Instead we used a non-maskable interrupt to run through enough addresses to keep the DRAM refreshed; from memory it was something like 64 or 128.) We would have written a special test program in EEPROM, with the NMI switched off.



      I believe with did it with US-made milspec chips: I certainly remember we had very few chips in ceramic packaging other than EEPROM and the occasional CPU.






        edited 1 hour ago





















        answered 1 hour ago









        jonathanjo
