*(decimal*)d = XXXm results in different output than BinaryWriter.Write(XXXm)

I'm writing an optimized binary reader/writer for learning purposes. Everything worked fine until I wrote the tests for the encoding and decoding of decimals. My tests also check whether the BinaryWriter of the .NET Framework produces output compatible with my BinaryWriter, and vice versa.



I'm mostly using unsafe code and pointers to write my variables into byte arrays. These are the results when writing a decimal via pointers and via the BinaryWriter:



BinaryWriter....: E9 A8 94 23 9B CA 4E 44 63 C5 44 39 00 00 1A 00
unsafe *decimal=: 00 00 1A 00 63 C5 44 39 E9 A8 94 23 9B CA 4E 44


My code for writing a decimal looks like this:



unsafe
{
    byte[] data = new byte[16];

    fixed (byte* pData = data)
        *(decimal*)pData = 177.237846528973465289734658334m;
}


And using the BinaryWriter of the .NET Framework it looks like this:



using (MemoryStream ms = new MemoryStream())
{
    using (BinaryWriter writer = new BinaryWriter(ms))
        writer.Write(177.237846528973465289734658334m);

    ms.ToArray();
}


Microsoft made their BinaryWriter incompatible with the way decimals are stored in memory. Looking into the reference source, we can see that it uses an internal method called GetBytes, which means that the output of GetBytes is incompatible with the way decimals are stored in memory.
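
As a cross-check using only public APIs: decimal.GetBits returns the four 32-bit parts in the order lo, mid, hi, flags, and writing each part little-endian should reproduce the BinaryWriter bytes. A minimal sketch (class and variable names are only illustrative):

using System;
using System.IO;
using System.Linq;

class GetBitsVsBinaryWriter
{
    static void Main()
    {
        decimal value = 177.237846528973465289734658334m;

        // 16 bytes as produced by BinaryWriter.Write(decimal).
        byte[] viaWriter;
        using (MemoryStream ms = new MemoryStream())
        {
            using (BinaryWriter writer = new BinaryWriter(ms))
                writer.Write(value);

            viaWriter = ms.ToArray();
        }

        // decimal.GetBits returns { lo, mid, hi, flags }; writing each part
        // little-endian should give the same 16 bytes on a little-endian system.
        byte[] viaGetBits = decimal.GetBits(value)
            .SelectMany(part => BitConverter.GetBytes(part))
            .ToArray();

        Console.WriteLine("BinaryWriter: " + BitConverter.ToString(viaWriter));
        Console.WriteLine("GetBits.....: " + BitConverter.ToString(viaGetBits));
        Console.WriteLine("Equal.......: " + viaWriter.SequenceEqual(viaGetBits));
    }
}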



Is there a reason why Microsoft implemented writing decimals this way? Could it be dangerous to use the unsafe approach to implement my own binary formats or protocols, because the internal layout of decimals may change in the future?



The unsafe approach also performs considerably better than GetBytes as called by the BinaryWriter.
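
To put numbers on that claim, a rough Stopwatch-based comparison could look like the following sketch (iteration count and buffer handling are arbitrary choices; it must be compiled with /unsafe):

using System;
using System.Diagnostics;
using System.IO;

class DecimalWriteTiming
{
    const int Iterations = 10000000;

    static unsafe void Main()
    {
        decimal value = 177.237846528973465289734658334m;
        byte[] buffer = new byte[16];

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            fixed (byte* pData = buffer)
                *(decimal*)pData = value;        // raw copy of the struct
        }
        Console.WriteLine("unsafe cast : " + sw.ElapsedMilliseconds + " ms");

        using (MemoryStream ms = new MemoryStream(16))
        using (BinaryWriter writer = new BinaryWriter(ms))
        {
            sw.Restart();
            for (int i = 0; i < Iterations; i++)
            {
                ms.Position = 0;                 // overwrite the same 16 bytes
                writer.Write(value);             // goes through the internal GetBytes
            }
            Console.WriteLine("BinaryWriter: " + sw.ElapsedMilliseconds + " ms");
        }
    }
}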










  • Possible duplicate of Are the raw bytes written by .NET System.IO.BinaryWriter readable by other platforms?

    – GSerg
    Jan 2 at 14:51

  • @GSerg To be fair, that referenced answer only says it's a .NET-specific format. It doesn't really answer the OP's questions.

    – Neijwiert
    Jan 2 at 14:55

  • @Neijwiert Well, no one can answer the OP's question of whether Microsoft will ever feel like changing this format. We can only speculate that it would be highly unlikely for compatibility reasons.

    – GSerg
    Jan 2 at 14:56

  • @GSerg True, but I was kind of hoping for somebody to explain how the current implementation is done, as I cannot.

    – Neijwiert
    Jan 2 at 14:58

  • @Neijwiert See the second answer: decimal --> decimal.GetBytes(), 16 bytes, see the System.Decimal class code. It's a typo though; it should be GetBits().

    – GSerg
    Jan 2 at 14:58


1 Answer


Microsoft itself has tried to keep the decimal and the layout of its components as stable as possible. You can see this in the reference source of the .NET Framework:



// NOTE: Do not change the order in which these fields are declared. The
// native methods in this class rely on this particular order.
private int flags;
private int hi;
private int lo;
private int mid;


Together with [StructLayout(LayoutKind.Sequential)], the structure is laid out in memory in exactly that field order.
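
That field order can be observed directly by reinterpreting a decimal as four ints. A small illustrative sketch (assuming the .NET Framework layout quoted above; compile with /unsafe):

using System;

class DecimalMemoryLayout
{
    static unsafe void Main()
    {
        decimal d = 177.237846528973465289734658334m;

        // Reinterpret the 16-byte struct as four 32-bit ints.
        // With the field order quoted above this yields flags, hi, lo, mid.
        int* parts = (int*)&d;

        Console.WriteLine("flags: 0x" + parts[0].ToString("X8"));
        Console.WriteLine("hi   : 0x" + parts[1].ToString("X8"));
        Console.WriteLine("lo   : 0x" + parts[2].ToString("X8"));
        Console.WriteLine("mid  : 0x" + parts[3].ToString("X8"));
    }
}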



You get different results because the GetBytes method writes the fields that make up the decimal in a different order than they appear in the structure itself:



internal static void GetBytes(Decimal d, byte[] buffer)
{
    Contract.Requires((buffer != null && buffer.Length >= 16), "[GetBytes]buffer != null && buffer.Length >= 16");
    buffer[0] = (byte)d.lo;
    buffer[1] = (byte)(d.lo >> 8);
    buffer[2] = (byte)(d.lo >> 16);
    buffer[3] = (byte)(d.lo >> 24);

    buffer[4] = (byte)d.mid;
    buffer[5] = (byte)(d.mid >> 8);
    buffer[6] = (byte)(d.mid >> 16);
    buffer[7] = (byte)(d.mid >> 24);

    buffer[8] = (byte)d.hi;
    buffer[9] = (byte)(d.hi >> 8);
    buffer[10] = (byte)(d.hi >> 16);
    buffer[11] = (byte)(d.hi >> 24);

    buffer[12] = (byte)d.flags;
    buffer[13] = (byte)(d.flags >> 8);
    buffer[14] = (byte)(d.flags >> 16);
    buffer[15] = (byte)(d.flags >> 24);
}


It seems to me that the corresponding .NET developer wanted the format produced by GetBytes to be little-endian, but made one mistake: they reordered not only the bytes within each component of the decimal but also the components themselves (flags, hi, lo, mid becomes lo, mid, hi, flags). Little-endian byte order, however, applies only to individual fields, not to whole structs - especially with [StructLayout(LayoutKind.Sequential)].
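
In practice this means a reader for the BinaryWriter format does not have to rely on the internal layout at all: it can reassemble the four little-endian ints in the order they were written (lo, mid, hi, flags) and pass them to the public decimal(int[]) constructor. A minimal sketch (ReadDecimal is only an illustrative helper; it assumes a little-endian machine):

using System;

static class DecimalDecoding
{
    // Reads a decimal from 16 bytes written by BinaryWriter.Write(decimal):
    // four little-endian ints in the order lo, mid, hi, flags, which is
    // exactly the order the public decimal(int[]) constructor expects.
    public static decimal ReadDecimal(byte[] buffer, int offset)
    {
        int lo    = BitConverter.ToInt32(buffer, offset);
        int mid   = BitConverter.ToInt32(buffer, offset + 4);
        int hi    = BitConverter.ToInt32(buffer, offset + 8);
        int flags = BitConverter.ToInt32(buffer, offset + 12);

        return new decimal(new[] { lo, mid, hi, flags });
    }
}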



My advice here is usually to use the methods Microsoft offers in its classes. So I would prefer any GetBytes- or GetBits-based way of serializing the data over doing it with unsafe, because Microsoft will keep it compatible with the BinaryWriter in any case. That said, the comments in the source are quite emphatic, and I wouldn't expect Microsoft to break the .NET Framework at this very basic level.



It's hard for me to believe that performance matters so much that it favours the unsafe way over GetBits. After all, we are talking about decimals here. You can still push the int[] returned by GetBits into your byte[] via unsafe.
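
A minimal sketch of that approach (WriteDecimal is only an illustrative helper; it assumes a little-endian platform, where the native int byte order matches what BinaryWriter writes):

using System;

static class DecimalEncoding
{
    // Writes a decimal in BinaryWriter-compatible order (lo, mid, hi, flags)
    // while still using a pointer for the actual copy into the buffer.
    public static unsafe void WriteDecimal(decimal value, byte[] buffer, int offset)
    {
        int[] bits = decimal.GetBits(value);    // { lo, mid, hi, flags }

        fixed (byte* pBuffer = &buffer[offset])
        {
            int* pInt = (int*)pBuffer;
            pInt[0] = bits[0];
            pInt[1] = bits[1];
            pInt[2] = bits[2];
            pInt[3] = bits[3];
        }
    }
}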






  • Internal structure of decimal is documented in the Decimal(Int32[]) constructor. It also explains why decimal.GetBits() returns an array of four ints, in a specific order, and what they mean. I do not see where there would be a bug with incorrect byte order.

    – GSerg
    Jan 2 at 22:17

  • @GSerg As already mentioned in other comments: GetBits() is irrelevant for this because BinaryWriter uses GetBytes() internally. Furthermore, the linked documentation doesn't tell why the memory layout of the structure differs from the layout returned by GetBytes(), whereas my answer does.

    – Matthias
    Jan 2 at 23:37

  • You have your reasoning backwards. GetBits() is the starting point because it is documented. What it returns cannot change. From this starting point we can see that the internal GetBytes() returns the same data as the documented GetBits(), but always in little-endian format (whereas GetBits() will naturally use the system's current endianness) - which makes sense for portability between systems with different endianness. These two methods will not change (that would break the ability to load decimals persisted to storage before the hypothetical change).

    – GSerg
    Jan 3 at 8:26

  • The internal order of private fields comprising the decimal structure, on the contrary, may easily change at any moment because that would be purely internal to the framework - but in reality it is not going to happen because there is no reason to make such a change (e.g. introducing new fields would break GetBits and GetBytes; if that had to be done, they would come up with a new type, e.g. decimal2). But even with that being the case, the strict answer is No: there is no guarantee that what you see by casting decimal* to byte* will not change, and you should not rely on it.

    – GSerg
    Jan 3 at 8:26












