How is graphics RAM different from system RAM?

I know that a GPU and a CPU are fundamentally different things and why they both suck at doing the other's job. But what I don't get is why standard system RAM has always been a generation behind the RAM used on video cards.



As I understand it, they're both just different types of DRAM, but it seems to me that the differences could be abstracted away by the memory controller baked into CPU and GPU silicon. The current standard for system RAM is DDR4, but video cards were using GDDR4 for years before DDR4 became a thing for desktops. Video cards are now shipping with HBM RAM (GDDR5?), which is faster than DDR4 system memory.



Why aren't we using the same kind of RAM for both? What makes them different?

memory graphics-card cpu

asked Nov 16 at 0:50 by Wes Sayeed

  • what I don't get is why standard system RAM has always been a generation behind the RAM used on video cards. - they're not. GDDR5 is basically DDR3 optimized for bandwidth (at the expense of latency); if it were up to me, GDDR5 would have been named GDDR3.
    – hanshenrik
    2 days ago

3 Answers

But what I don't get is why standard system RAM has always been a generation behind the RAM used on video cards.




GDDR, while based on the DDR standard, has its own hardware specification. If anything, the DDR specification is technically ahead of the GDDR specification, since each GDDR generation is based on an earlier DDR specification (most of the time; occasionally it is based on the previous GDDR specification instead).



One reason for the false belief that GDDR is ahead of DDR is that multiple iterations of the GDDR standard were based on DDR3. The same was true of GDDR2, whose specification has design elements from both DDR and DDR2.




However, it is important to note that this GDDR2 memory used on graphics cards is not DDR2 per se, but rather an early midpoint between DDR and DDR2 technologies. Using "DDR2" to refer to GDDR2 is a colloquial misnomer.




Source: DDR2 SDRAM



Likewise, GDDR4 and GDDR5 both took design elements from DDR3; GDDR5 is simply an improved GDDR design compared to GDDR4.




Like its predecessor, GDDR4, GDDR5 is based on DDR3 SDRAM memory, which has double the data lines compared to DDR2 SDRAM. GDDR5 also uses 8-bit wide prefetch buffers similar to GDDR4 and DDR3 SDRAM.




Source: GDDR5 SDRAM
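
As a rough illustration of what that 8n prefetch means in practice (not from the answer; the clock figures below are assumed, typical values, not exact part specifications), the per-pin transfer rate of a DDR-family part is approximately the internal array clock multiplied by the prefetch depth:

    # Rough illustration with assumed, typical clock figures (not exact part specs):
    # in DDR-family SDRAM the per-pin transfer rate is roughly the internal
    # array clock multiplied by the prefetch depth.

    def per_pin_rate_mtps(array_clock_mhz, prefetch_depth):
        """Per-pin transfer rate in MT/s = internal array clock (MHz) x prefetch depth."""
        return array_clock_mhz * prefetch_depth

    # DDR3-1600: ~200 MHz internal array clock, 8n prefetch -> ~1600 MT/s per pin
    print(per_pin_rate_mtps(200, 8))     # 1600

    # GDDR5 at ~8 Gbps per pin: ~1000 MHz array clock, same 8n prefetch -> ~8000 MT/s
    print(per_pin_rate_mtps(1000, 8))    # 8000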




As I understand it, they're both just different types of SDRAM, but it seems to me that the differences could be abstracted away by the memory controller baked into CPU and GPU silicon.




The two standards are actually vastly different; one of those differences is how many bits can be moved per data line. The GDDR specification is not compatible with the memory controllers in Intel and AMD x86 processors. GDDR parts can transfer more bits because they are soldered onto the graphics card and wired directly to the GPU's memory controller over a very wide, point-to-point bus, rather than sitting in standard DIMM slots; the card as a whole then talks to the rest of the system over PCI-e.




The current standard for system RAM is DDR4, but video cards were using GDDR4 for years before DDR4 became a thing for desktops.




This is because GDDR4 is based on the DDR3 specification, not the DDR2 specification. DDR3 was in development from around 2005, but the finished standard and the first desktop products did not arrive until 2007. GDDR4 was announced in 2005 and first shipped on graphics cards in 2006. So while the names are a generation apart, the underlying technologies actually reached the market at roughly the same time.




  • GDDR4 SDRAM

  • DDR3 SDRAM



Video cards are now shipping with HBM RAM (GDDR5?), which is faster than DDR4 system memory.




The current GDDR standards are actually GDDR5X and GDDR6. HBM (High Bandwidth Memory) is not a GDDR generation at all; it is a separate standard for stacked DRAM dies with a very wide interface, manufactured by SK Hynix and Samsung.




Why aren't we using the same kind of RAM for both?




The two standards are not compatible with one another.




What makes them different?




What makes them different is their manufacturing process and their specifications. While GDDR is based on the DDR specification, GDDR is not actually a generation ahead of DDR, although there are now huge performance gaps between the two standards because of the far greater bandwidth that GDDR has access to.






answered Nov 16 at 1:27 by Ramhound (edited Nov 16 at 12:33 by psmears)
  • @pbfy0 - The feedback I am getting isn't productive though. I am being told another answer is better than my answer.
    – Ramhound
    27 mins ago

  • @pbfy0 comments are supposed to be transitory but constructive information on how to improve posts. Once they have outlived their usefulness then they are subject to deletion. Unconstructive comments will likely end up being purged sooner rather than later.
    – Mokubai
    26 mins ago

  • I added my original comment both to explain my downvote (as the site prompts), and to add additional information that I wasn't completely sure of but could be verified by other community members. I do not believe that that comment qualifies as unconstructive.
    – pbfy0
    10 mins ago

  • @pbfy0 - Your feedback wasn't the problem. I just flagged everything. I was done with being told another answer was better than mine; downvote and move on in that case.
    – Ramhound
    2 mins ago

The underlying tech is more or less the same; GPUs just leverage a much wider memory bus.



GPUs are easier to design this way, as a single unit where many memory chips can be connected directly to the processing unit through a custom circuit board. This allows for a very wide memory bus, often exceeding 256 bits. HBM takes this further with a 1024-bit bus.



CPUs rely on a much more generalized architecture of sockets and motherboard specifications, so anything beyond the standard two 64-bit channels is typically reserved for the high-end and server market.



It should also be mentioned that GPU memory is tuned to trade latency for high bandwidth - lots of shoveling and not a lot of seeking. This is not the case with CPU memory, where low latency is desired for good random-access speeds.
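
To put some rough numbers on the bus-width point (these are illustrative, assumed configurations, not figures from the answer): peak theoretical bandwidth is simply bus width times per-pin data rate, so widening the bus pays off directly.

    # Peak theoretical memory bandwidth = bus width (bits) x per-pin data rate.
    # The configurations below are assumed, typical examples, not exact products.

    def peak_bandwidth_gbs(bus_width_bits, data_rate_gtps):
        """Peak bandwidth in GB/s for a given bus width and per-pin rate (GT/s)."""
        return bus_width_bits / 8 * data_rate_gtps

    # Dual-channel DDR4-3200: 2 x 64-bit channels at 3.2 GT/s -> ~51 GB/s
    print(peak_bandwidth_gbs(128, 3.2))    # 51.2

    # GDDR5 graphics card: 256-bit bus at 8 GT/s -> 256 GB/s
    print(peak_bandwidth_gbs(256, 8.0))    # 256.0

    # One first-generation HBM stack: 1024-bit bus at 1 GT/s -> 128 GB/s
    print(peak_bandwidth_gbs(1024, 1.0))   # 128.0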

answered Nov 16 at 3:46 by Robert (new contributor)

One special feature of some types of graphics RAM is that they can be accessed by two independent (or mostly independent) bus systems, which makes using them as either framebuffers (the portion of video RAM where the pixels sent to the screen every 1/60th of a second or so are kept) or texture buffers easier, with fewer access conflicts and less overhead.
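
For a sense of scale (assumed resolution and refresh figures, added here only as an illustration), the display scanout alone consumes a steady stream of memory reads, which is exactly the traffic a second, independent port can service without blocking the GPU's own accesses:

    # Rough illustration (assumed figures): the steady bandwidth consumed just by
    # reading the framebuffer out to the display - traffic that a second,
    # independent port on dual-ported graphics RAM can handle without getting in
    # the way of the GPU's rendering accesses.

    def scanout_bandwidth_gbs(width, height, bytes_per_pixel, refresh_hz):
        """Bytes per second read purely for display refresh, expressed in GB/s."""
        return width * height * bytes_per_pixel * refresh_hz / 1e9

    # 1920x1080, 4 bytes per pixel, 60 Hz -> roughly 0.5 GB/s of constant reads
    print(scanout_bandwidth_gbs(1920, 1080, 4, 60))   # ~0.50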

answered 2 days ago by rackandboneman