Own implementation of Lazy object























The problem with the original Lazy<T> in C# is that you have to put the initialization in the constructor if you want to refer to this.



For me that is 95% of the cases. That means I have to put half of my logic concerning the lazy property in the constructor, which adds more boilerplate code.
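
To make the complaint concrete, here is a minimal sketch (my own, with placeholder names) of the constructor-based pattern in question: a field initializer cannot reference this, so the factory ends up being wired up in the constructor, away from the property it belongs to.

public class Document
{
    private readonly Lazy<string> _title;

    public Document()
    {
        // The factory captures 'this', so it has to be created here,
        // away from the property it belongs to.
        _title = new Lazy<string>(() => BuildTitle(this));
    }

    public string Title => _title.Value;

    private static string BuildTitle(Document doc) => "Title for " + doc.GetType().Name;
}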



See this question: https://stackoverflow.com/questions/53271662/concise-way-to-writy-lazy-loaded-properties-in-c-sharp/53316998#53316998



Now I have made a new version that moves the initialization to the moment Get is called. This saves boilerplate code and lets me group everything about the property together.



But:




  • is the current implementation 'safe'?

  • are there any important performance considerations vs the original one?

  • is there anything else I'm missing that I should be concerned about in a high traffic application?


Class:



public class MyLazy
{
    private object _Value;
    private bool _Loaded;
    private object _Lock = new object();

    public T Get<T>(Func<T> create)
    {
        if ( _Loaded )
            return (T)_Value;

        lock (_Lock)
        {
            if ( _Loaded ) // double checked lock
                return (T)_Value;

            _Value = create();
            _Loaded = true;
        }

        return (T)_Value;
    }

    public void Invalidate()
    {
        lock ( _Lock )
            _Loaded = false;
    }
}


Use:



MyLazy _SubEvents = new MyLazy();

public SubEvents SubEvents => _SubEvents.Get(() =>
{
    // [....]
    return x;
});


Or like:



MyLazy _SubEvents = new MyLazy();

public SubEvents SubEvents => _SubEvents.Get(LoadSubEvents);

public SubEvents LoadSubEvents()
{
    // [....]
    return x;
}

































  • @AdrianoRepetti this would make a great answer ;-)
    – t3chb0t
    Nov 15 at 10:44










  • @AdrianoRepetti, I actually agree, so I provided the second example (how I usually use it). Problem is that I indeed 95% of the time need to access instance variables, and putting in the constructor I really dislike. Because if you have 5+ properties like that you're moving actually a lot of logic into your constructor meaning you really have to jump around in your code because the rest of the boilerplate actually needs to be outside the constructor.
    – Dirk Boer
    Nov 15 at 18:57












  • 1




    Source code of System.Lazy<T>: referencesource.microsoft.com/#mscorlib/system/… (put for curious people)
    – Orkhan Alikhanov
    2 days ago










  • Hi @OrkhanAlikhanov, thanks! I tried stripping a lot of that source to try to get to the core. See the results in one of the answers that I posted if you think it's interesting.
    – Dirk Boer
    2 days ago















11 Answers

















up vote 15 down vote (accepted, answer by Theraot)










Having backported Lazy to .NET 2.0, I want to offer a variant...



First of all, this looks a lot like LazyInitializer. If that works for you, then there is no need to go about creating a Lazy<T> variant.
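
For reference, a minimal sketch (my own, with illustrative names) of what the same property could look like with LazyInitializer; it keeps the value, the flag and the lock in plain fields and does the double-checked locking for you, while the factory can freely refer to this:

using System;
using System.Threading;

public class SubEvents { }

public class SubEventsHost
{
    private SubEvents _subEvents;
    private bool _subEventsLoaded;
    private object _subEventsLock;

    // EnsureInitialized performs the thread-safe lazy initialization.
    public SubEvents SubEvents =>
        LazyInitializer.EnsureInitialized(
            ref _subEvents, ref _subEventsLoaded, ref _subEventsLock,
            () => LoadSubEvents());

    private SubEvents LoadSubEvents() => new SubEvents();
}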



With that said, let's continue...





Before going into the code, let us see some things that may go wrong:




  • The ABA problem

  • Thread visibility/Code reordering

  • Double initialization

  • Value factory recursion

  • Failed initialization

  • Preventing Garbage Collection


The code in your question will fail on visibility: other threads will not necessarily be aware that you changed _Loaded. In fact, it can fall victim to the read of _Loaded being optimized so that it is only read once per method call. It will also be problematic to have create call Get<T>: it would overwrite _Value... meaning that _Value can change after it has been initialized.



Also, execution order could be reordered to set _Loaded before calling create.





The second version:



volatile is not enough of a guarantee here. While you have made _Loaded volatile, changes to _Value might not be visible. Thus, another thread could reach the object in a state where it claims to be loaded but _Value appears unchanged. In fact, it could appear partially changed (a torn read).



Note: a caveat: given that I am used to targeting multiple versions of .NET, I consider wrong something that would fail in any of them, and that includes .NET 2.0, .NET 3.0 and .NET 3.5, which have broken volatile semantics. This makes me extra careful around volatile.



We still have to worry about create calling Get.



I am glad to see T moved to the class level.





The third version:



The initialization has been moved to the constructor. I think this defeats the purpose: if you can have it in the constructor... why don't you use Lazy<T>?



The partially changed problem is fixed by using Boxed. You could have used StrongBox<T> here.



I see you inherited some weirdness from the reference source... The reason they check Boxed that weirdly (also the reason why they do not use StrongBox<T>) is because it might not contain a value, but a cached Exception.



In your previous version, when create throws, Get throws, and another thread gets a chance to initialize. However, it is possible to initialize Lazy<T> in such a way that it stores the exception and throws it for every thread that tries to read it (see LazyThreadSafetyMode).
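
As a small illustration of that exception-caching behavior (my own sketch, not part of the original answer):

using System;
using System.Threading;

class ExceptionCachingDemo
{
    static void Main()
    {
        // With a factory delegate and ExecutionAndPublication, Lazy<T> caches the
        // exception and rethrows the same one on every access to Value.
        var lazy = new Lazy<int>(
            () => throw new InvalidOperationException("initialization failed"),
            LazyThreadSafetyMode.ExecutionAndPublication);

        for (var i = 0; i < 2; i++)
        {
            try { _ = lazy.Value; }
            catch (InvalidOperationException e) { Console.WriteLine(e.Message); } // printed both times
        }
    }
}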



Aside from that, the use of Volatile fixed the other visibility and reordering problems.



Note: when the factory throws, m_valueFactory has already been set to the sentinel, meaning that the next thread to call will run the sentinel, resulting in Value being initialized to default(T). This is probably not what you want.






  • Can it be simplified to a lock?




    No. As you have already guessed.





  • Can the sentinel be replaced by a boolean and a null assignment for the factory?




    Depending on what you want (see note above), yes. The reference source has it the way it has it because of LazyThreadSafetyMode.





  • When you assume the logic is good, can the whole CreateValue() be replaced by an inline new Boxed(factory())?




    No. You could have boxed = new Boxed(m_valueFactory());. However, you are still holding a reference to m_valueFactory, which might prevent garbage collection. Also, in case of recursion, this would be a StackOverflowException. You may set m_valueFactory to null... However, because of the possibility of recursion, that could lead to a NullReferenceException. Thus, we need to set it to a sentinel or set it to null and add a null check. Either way, CreateValue is not a one-liner.






The fourth version:



Be careful with the writes to _value, they might not be atomic when it is a value type. There is a good reason why the prior version had boxed: another thread could see a partially initialized value (torn reads). This is safe in .NET Core, but not in .NET Framework.



Note: This code actually works well in modern .NET, in particular in .NET Core.



I would fix this by using StrongBox<T>. Something similar to firda's answer.



About firda's answer: it holds onto valueFactory, and keeping that reference after initialization keeps alive whatever it points to, preventing the garbage collector from taking it. This is by design. It might not be what you want. One possible way to modify it is to set factory to null at the end of the lock in GetOrCreate (before returning, of course).



Back to the problem of torn reads, consider this demonstration:



public struct Long8
{
public long D1;
public long D2;
public long D3;
public long D4;
public long D5;
public long D6;
public long D7;
public long D8;

public long Val => D1 + D8;
}

public class Program
{
public static void Main()
{
// We are using Long8 because writing to it would not be atomic
// Here DataInit is poor man's lazy
var dataInit = new DataInit<Long8>();
// value will hold the values that the reader gets
var value = default(Long8);
// we use done to signal the writer to stop
var done = 0;
var reader = Task.Run
(
() =>
{
// We are reading the values set by the writer
// We expect Val to be 1 because
// the writer does not write something that would give a different value
var spinWait = new SpinWait();
do
{
// This loop is here to wait for the writer to set a value since the last call to invalidate
while (!dataInit.TryGet(out value))
{
spinWait.SpinOnce();
}
// dataInit.Invalidate();
// Console.Write(".");
spinWait.SpinOnce();
} while (value.Val == 1);
// Console.WriteLine();
}
);
var writer = Task.Run
(
() =>
{
// Here we will write { D1 = 1, D8 = 0 } and { D1 = 0, D8 = 1 }
// In either case Val should be 1
var spinWait = new SpinWait();
while (Volatile.Read(ref done) == 0)
{
dataInit.Init(new Long8 { D1 = 1, D8 = 0 });
spinWait.SpinOnce();
dataInit.Invalidate();
dataInit.Init(new Long8 { D1 = 0, D8 = 1 });
spinWait.SpinOnce();
dataInit.Invalidate();
}
}
);
reader.Wait();
System.Diagnostics.Debugger.Break();
Volatile.Write(ref done, 1);
writer.Wait();
Console.WriteLine(value.Val);
}

public class DataInit<T>
{
private T _data;
private volatile bool _initialized;

public void Init(T value)
{
// So, we write to a generic field
// And then to a volatile bool to notify that we did write to the field
// Yet, when T is a value type larger than the word size of the CPU
// Another thread may find that _initialized = true but _data is not what we would expect
_data = value; // Write 1 <----------------
_initialized = true; // Write 2 (volatile) <------
}

public void Invalidate()
{
_initialized = false;
}

public bool TryGet(out T value)
{
// Here we are reading the volatile before we read the other field...
// Does it make any difference?
if (_initialized)
{
value = _data;
if (_initialized)
{
return true;
}
}
value = default(T);
return false;
}
}
}


This code demonstrates a torn read, in both .NET Framework and .NET Core. What it shows is that a reader can read _data in between _initialized = false; _data = value; _initialized = true;, resulting in something partially set. To be more specific: the reader sees that the value is initialized and starts reading while the other thread is still modifying the value. Your code does not suffer this particular fate as long as it does not have Invalidate (as firda mentions in the comments). This is something to be careful with.



This is based on Reproduce torn reads of decimal in c#, with some modifications introduced by VisualMelon. It is for demonstration purposes only; it is not something you want in production.



Also consider the cases where valueFactory read Value and where valueFactory throws.






  • I expect that we can change it for object though? Saving the separate class.




    Yes, you can use object instead of LazyHelper.





  • is it allowed to inline some other methods, like ViaFactory?




    Yes, in fact, the compiler may do this optimization.





  • it feels a bit strange to me that Value is called recursively the first time, but that might have to do with other code that I stripped - i.e. the exception caching.




    I am not sure why they went with this design in corefx. However - at least for your code - it is a better design to have ViaFactory and ExecutionAndPublication return than to read the field again.





  • Not sure if it is allowed, but if I inline even more you really get a 'normal' double-checked lock with a volatile. The main difference is that the locker object is also being used as a _loaded flag.




    Yeah, sure. The first thread will set _state = null; and then the second thread gets to enter the lock, but you do not want it to call _factory again. So, you check if it is still what you set it to (it isn't, it is null), and that allows it to correctly decide not to enter the if.






Alright, now I get to write one...



The requirements I will take are:




  • To be thread-safe

  • To allow lazy initialization

  • To allow invalidation

  • To pass the valueFactory on Get


I will also add a TryGet method, for those threads that want to get the value if it is initialized but do not want to attempt to initialize it.



First I have made one based on firda's:



public class MyLazy<T>
{
    private readonly object _syncroot = new object();
    private StrongBox<T> _value;

    public T Get(Func<T> valueFactory)
    {
        if (valueFactory == null)
        {
            throw new ArgumentNullException(nameof(valueFactory));
        }
        var box = Volatile.Read(ref _value);
        return box != null ? box.Value : GetOrCreate();

        T GetOrCreate()
        {
            lock (_syncroot)
            {
                box = Volatile.Read(ref _value);
                if (box != null)
                {
                    return box.Value;
                }
                box = new StrongBox<T>(valueFactory());
                Volatile.Write(ref _value, box);
                return box.Value;
            }
        }
    }

    public void Invalidate()
    {
        Volatile.Write(ref _value, null);
    }

    public bool TryGet(out T value)
    {
        var box = Volatile.Read(ref _value);
        if (box != null)
        {
            value = box.Value;
            return true;
        }
        value = default(T);
        return false;
    }
}


Let us be clear, firda's is a good solution. This one mainly differs in that I am not passing valueFactory to the constructor (I am also using StrongBox<T> because there is no reason to not use the provided API). As you mention in the question "95% of the cases" you can have valueFactory in the constructor, this is for the rest of the cases.



Notice that when Invalidate is called, the next thread to enter will run the valueFactory.



And for kicks I have done one without a monitor (lock). This version uses ManualResetEventSlim instead. This means that it can wake threads all at once instead of one by one, which would be good for the situation where there are many threads running in parallel. However, it requires extra work to get right. For starters, if valueFactory throws, we have to wake the threads, have one of them try to initialize, and have the rest go back to waiting… and do this until they all fail or one succeeds, meaning that we need to loop. Furthermore, with lock we get reentry handled for us automatically; with ManualResetEventSlim we have to handle that ourselves.



Also, I recommend to have a look at Interlocked.CompareExchange.



public class Lazy<T>
{
private readonly ManualResetEventSlim _event;
private Thread _ownerThread;
private int _status;
private StrongBox<T> _value;

public Lazy()
{
_event = new ManualResetEventSlim(false);
}

public T Get(Func<T> valueFactory)
{
// _status == 0 :: Not Initialized
// _status == 1 :: Initializing or Initialized
// _ownerThread == null :: Not Initialized or Initialized
// _ownerThread != null :: Initializing
// _value == null :: Not Initialized or Initializing
// _value != null :: Initialized
if (valueFactory == null)
{
throw new ArgumentNullException(nameof(valueFactory));
}
var ownerThread = Volatile.Read(ref _ownerThread);
if (ownerThread == Thread.CurrentThread)
{
// We have reentry, this is the same thread that is initializing on the value
// We could:
// 1. return default(T);
// 2. throw new LockRecursionException()
// 3. let it in
// We are doing that last one
return Create();
}
// Note: It is ok to loop here. Since Threads will only ever change _ownerThread to themselves...
// there is no chance that _ownerThread suddenly changes to the current thread unexpectedly
// Why we loop? See at the bottom of the loop.
while (true)
{
ownerThread = Volatile.Read(ref _ownerThread);
if (ownerThread == null)
{
// There is no thread initializing currently
// Either the value has already been initialized or not
// Check status to tell
if (Volatile.Read(ref _status) == 1)
{
// Value has already been initialized
var box = Volatile.Read(ref _value);
if (box == null)
{
// Another thread invalidated after we did read _status but before we got Value
continue;
}
return box.Value;
}
// Value is yet to be initialized
// Try to get the right to initialize, Interlocked.CompareExchange only one thread gets in
var found = Interlocked.CompareExchange(ref _status, 1, 0);
if (found == 0)
{
// We got the right to initialize
var result = default(T);
try
{
try
{
Volatile.Write(ref _ownerThread, Thread.CurrentThread);
result = Create();
}
finally
{
// Other threads could be waiting. We need to let them advance.
_event.Set();
// We want to make sure _ownerThread is null so another thread can enter
Volatile.Write(ref _ownerThread, null);
}
}
catch (Exception exception) // Just appeasing the tools telling me I must put an exception here
{
GC.KeepAlive(exception); // Just appeasing the tools telling me I am not using the exception, this is a noop.
// valueFactory did throw an exception
// We have the option to device a method to cache it and throw it in every calling thread
// However, I will be reverting to an uninitialized state
Volatile.Write(ref _status, 0);
// Reset event so that threads can wait for initialization
_event.Reset();
throw;
// Note: I know this is a weird configuration. Why is this catch not in the inner try?
// Because I want to make sure I set _status to 0 after the code from finally has run
// This way I am also sure that another thread cannot enter while finally is running, even if there was an exception
// In particular, I do not want Invalidate to call _event.Reset before we call _event.Set
}
return result;
}
// We didn't get the right to initialize
// Another thread managed to enter first
}
// Another thread is initializing the value
_event.Wait();
// Perhaps there was an exception during initialization
// We need to loop, if everything is ok, `ownerThread == null` and `Volatile.Read(ref _status) == 1`
// Otherwise, we would want a chance to try to initialize
}

T Create()
{
// calling valueFactory, it could throw
var created = valueFactory();
Volatile.Write(ref _value, new StrongBox<T>(created));
return created;
}
}

public T Get()
{
var ownerThread = Volatile.Read(ref _ownerThread);
if (ownerThread == Thread.CurrentThread)
{
// We have reentry, this is the same thread that is initializing on the value
// We could:
// 1. return default(T);
// 2. throw new LockRecursionException()
// We are doing the last one
throw new LockRecursionException();
}
// Note: It is ok to loop here. Since Threads will only ever change _ownerThread to themselves...
// there is no chance that _ownerThread suddenly changes to the current thread unexpectedly
// Why we loop? See at the bottom of the loop.
while (true)
{
ownerThread = Volatile.Read(ref _ownerThread);
if (ownerThread == null)
{
// There is no thread initializing currently
// Either the value has already been initialized or not
// Check status to tell
if (Volatile.Read(ref _status) == 1)
{
// Value has already been initialized
var box = Volatile.Read(ref _value);
if (box == null)
{
// Another thread invalidated after we did read _status but before we got Value
continue;
}
return box.Value;
}
}
// Value is yet to be initialized, perhaps another thread is initializing the value
_event.Wait();
// Perhaps there was an exception during initialization
// We need to loop, if everything is ok, `ownerThread == null` and `Volatile.Read(ref _status) == 1`
// Otherwise, we keep waiting
}
}

public void Invalidate()
{
var ownerThread = Volatile.Read(ref _ownerThread);
if (ownerThread == null)
{
// There is no thread initializing currently
// Either the value has already been initialized or not
// Try to say it is not initialized
var found = Interlocked.CompareExchange(ref _status, 0, 1);
if (found == 1)
{
// We did set it to not initialized
// Now we have the responsibility to notify that the value is not set
_event.Reset();
// And the privilege to destroy the value >:v (because we are using StrongBox<T> this is atomic)
Volatile.Write(ref _value, null);
}
}
// Either:
// 1. Another thread is initializing the value. In this case we pretend we got here before that other thread did enter.
// 2. The value is yet to be initialized. In this case we have nothing to do.
// 3. Another thread managed to invalidate first. Let us call it a job done.
// 4. This thread did invalidate. Good job.
// 5. We have reentry
}

public bool TryGet(out T value)
{
var ownerThread = Volatile.Read(ref _ownerThread);
if (ownerThread == null)
{
// There is no thread initializing currently
// Either the value has already been initialized or not
// Check status to tell
if (Volatile.Read(ref _status) == 1)
{
// Value has already been initialized
var box = Volatile.Read(ref _value);
if (box != null)
{
value = box.Value;
return true;
}
}
}
// Either:
// 1. Another thread invalidated after we did read _status but before we got Value
// 2. Value is yet to be initialized
// 3. Another thread is initializing the value
// 4. We have reentry
value = default(T);
return false;
}
}


As you can see, this version is more complicated. It would be harder to maintain and harder to debug. Yet, it gives a greater level of control. For example, I have managed to give you a Get method that waits for another thread to initialize. I agree, this is a bad API… it would be better if it were a Task.



I did try to implement it using Task<T> instead, however I could not get it right. I believe that it would only make sense with proper exception caching.



So, let us add exception caching and use Task<T>.



public class TaskLazy<T>
{
private Thread _ownerThread;
private TaskCompletionSource<T> _source;

public TaskLazy()
{
_source = new TaskCompletionSource<T>();
}

public Task<T> Get(Func<T> valueFactory)
{
if (valueFactory == null)
{
throw new ArgumentNullException(nameof(valueFactory));
}
var source = Volatile.Read(ref _source);
if (!source.Task.IsCompleted)
{
var ownerThread = Interlocked.CompareExchange(ref _ownerThread, Thread.CurrentThread, null);
if (ownerThread == Thread.CurrentThread)
{
return Task.Run(valueFactory);
}
if (ownerThread == null)
{
try
{
source.SetResult(valueFactory());
}
catch (Exception exception)
{
source.SetException(exception);
throw; // <-- you probably want to throw here, right?
}
finally
{
Volatile.Write(ref _ownerThread, null);
}
}
}
return source.Task;
}

public Task<T> Get()
{
var source = Volatile.Read(ref _source);
if (!source.Task.IsCompleted)
{
var ownerThread = Volatile.Read(ref _ownerThread);
if (ownerThread == Thread.CurrentThread)
{
throw new LockRecursionException();
}
}
return source.Task;
}

public void Invalidate()
{
if (Volatile.Read(ref _source).Task.IsCompleted)
{
Volatile.Write(ref _source, new TaskCompletionSource<T>());
}
}
}


Ah, beautiful. By removing the idea that if a valueFactory fails another one gets a chance to run, and replacing it with exception caching, we can remove the looping, and this allows us to return Task<T>, so you get stuff like Wait and ContinueWith for free.





Addendum:



About volatile, have a read of:




  • The C# Memory Model in Theory and Practice, Part 2

  • Threading in C#, Part 4 (Advanced Threading)


























up vote 38 down vote (answer by Eric Lippert)














is the current implementation 'safe'?




Absolutely not. The fact that you had to ask this question indicates that you do not understand enough about threading to build your own mechanisms like this. You need to have a deep and thorough understanding of the memory model to build these mechanisms. That is why you should always rely on the mechanisms provided for you in the framework, which were written by experts.



Why is it unsafe? Consider the following scenario. We have two threads, A and B. _Value is null and _Loaded is false.




  • We're on thread A.

  • The memory location of _Value is loaded into the processor cache for the CPU that thread A is affinitized to. It is null.

  • We switch to thread B.

  • Thread B reads _Loaded as false, takes the lock, checks _Loaded again, calls create, assigns _Value and _Loaded and leaves the lock.

  • We switch back to thread A.


  • _Loaded is now true, so thread A returns _Value from the processor cache, which is null.


Thread A is not required to invalidate the cache because thread A never takes a lock!



Now, I made an argument here from processor caches. This is the wrong argument to make in general. Rather, what you must do when trying to build a new threading mechanism like this is to not reason about any specific processor architecture, but rather to reason about the abstract memory model of the C# language. C# permits reads and writes to move forwards and backwards in time in multithreaded programs. Any time travel that is not explicitly forbidden by the C# specification must be considered to be possible. Your task is to then write code that is correct for any possible combination of movements of reads and writes in time regardless of whether they are really possible on a specific processor or not.



Note that in particular the C# specification does not require that all threads observe a consistent set of write and read re-orderings. It is perfectly legal and possible for two threads to disagree on how a read was re-ordered with respect to a write.



If writing correct programs in a world where all reads and writes can be moved around in time sounds hard, that's because it is. I am not competent to do this work, and I do not attempt to. I leave it to experts.




are there any important performance considerations vs the original one?




Only you can answer that question. Answer performance questions by gathering real-world empirical data.



However, I can say a few things about this problem in general.



The first is: double-checked locking is intended to avoid the cost of the lock. Let's examine the assumptions underlying that intention. The assumption is that the cost of taking the lock is too high on the uncontended path. Is that assumption warranted? What is the cost of taking an uncontended lock? Did you measure it? Did you compare it against the cost of the lock-avoiding check? (Since the lock-avoiding check code is wrong, testing it for performance is not actually meaningful since we can always write faster code if we don't care about correctness, but still, we need to know whether this intervention is an improvement.) And most importantly, is the cost of taking an uncontended lock relevant to the consumer of this code? Because they are the stakeholder whose opinions are relevant; what do they say about the cost of an uncontended lock?



Let's suppose that the cost of an uncontended lock is relevant. Then surely the cost of a contended lock is enormously relevant. You've built a mechanism that potentially contends a lot of threads! What are the alternatives that you considered here? For example, you could avoid the lock altogether by deciding that it is OK for the create function to be called on multiple threads -- perhaps we know that it is cheap and idempotent. Those threads can then race to their heart's content to initialize the field, and we can use an interlocked exchange to ensure that we get a consistent value. That avoids the cost of the lock altogether, but it creates a different kind of cost, and puts a requirement on the caller to pass an idempotent creator.
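
A minimal sketch (mine, not from this answer) of that lock-free alternative, assuming the creator is cheap and idempotent and the value is a reference type:

using System;
using System.Threading;

public class RacyLazy<T> where T : class
{
    private T _value;

    // Several threads may each run 'create'; the first result to be published wins,
    // and every caller observes that same instance afterwards.
    public T Get(Func<T> create)
    {
        var current = Volatile.Read(ref _value);
        if (current != null)
            return current;

        var candidate = create();
        // Publish our candidate only if nobody beat us to it; otherwise return the winner.
        return Interlocked.CompareExchange(ref _value, candidate, null) ?? candidate;
    }
}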



Let's consider other aspects of your solution with respect to performance. You allocate the lock object regardless of whether you ever take the lock, and you keep it forever. What's the burden on the garbage collector? What is the impact on collection pressure? These things are all deeply relevant to performance. Again, remember, the assumption here is that we are so worried about the couple of nanoseconds it takes to enter and leave an uncontended lock that we're willing to write a double checked lock. If those nanoseconds are relevant then surely the milliseconds it takes to do an extra collection are incredibly relevant!




is there anything else I'm missing that I should be concerned about in a high traffic application?




I don't know how to answer that question.

























  • 11




    mhmm... this sounds sane but I'm still not sure how to apply this knowledge in a real-life scenario or simply to this question - maybe you could add some practical advice? Currently it's more like a rant on the OP for not knowing what they're doing ;-( I don't find this answer very helpful... sorry. It's not the same level as your other posts or blogs.
    – t3chb0t
    Nov 15 at 19:18








  • 8




    @t3chb0t: As for practical advice, I gave the best practical advice I know. Do not attempt to roll your own low-level mechanisms; use mechanisms that were built by experts.
    – Eric Lippert
    Nov 15 at 19:22






  • 6




    @EricLippert You can take code quality seriously and still frame your critique in such a way that doesn't make Dirk feel "burned". The first paragraph is especially condescending in my opinion.
    – CaTs
    Nov 16 at 6:33






  • 14




    @CaTs: That said, I note that you are responding to tone rather than to the factual content. That Dirk and I both lack the knowledge and ability to do this work correctly is a fact. That the best course of action is to use tools correctly that were built by people who are competent is also a fact. If stating those facts emphatically makes people uncomfortable, well, that's a price I'm willing to pay in order to clearly and unequivocally state that rolling your own threading mechanisms is incredibly dangerous and easy to get wrong.
    – Eric Lippert
    Nov 16 at 6:51






  • 7




    The purpose of StackOverflow is to create searchable artifacts - a library of questions and answers that may be useful to future programmers, and hopefully also useful to the original asker. If the CodeReview stack has the same goals, then this is an excellent answer. Considering that this is Eric Lippert (who served on the C# design team), adding a code snippet or other suggestion would have sent the wrong message to future programmers: "If I include this snippet / follow this pattern, then my homebrew thread management solution will be safe and work well - Eric Lippert said so."
    – benj2240
    Nov 16 at 15:22


















up vote 13 down vote














is there anything else I'm missing that I should be concerned about in a high traffic application?




Yes: your lazy's value isn't generic anymore but an object, and if Func<T> returns a value type then a lot of boxing/unboxing will take place. This might hurt performance.



I think a LazyFactory.GetOrCreate<T>(...) would do a better job.
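
For illustration, a sketch (mine, not t3chb0t's LazyFactory) of a generic variant that stores the value as T, so no boxing or casting is needed; it uses a plain lock rather than double-checked locking to keep the sketch simple and obviously correct:

using System;

public class MyLazy<T>
{
    private T _value;
    private bool _loaded;
    private readonly object _lock = new object();

    // The value is stored as T, so value types are not boxed and no cast is needed.
    public T Get(Func<T> create)
    {
        lock (_lock)
        {
            if (!_loaded)
            {
                _value = create();
                _loaded = true;
            }
            return _value;
        }
    }

    public void Invalidate()
    {
        lock (_lock)
        {
            _loaded = false;
            _value = default(T);
        }
    }
}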



























  • Hi @t3chb0t, good point & thanks for your feedback. I could solve this with making the class itself actually (optionally) generic.
    – Dirk Boer
    Nov 15 at 19:08












  • About LazyFactory.GetOrCreate<T> wouldn't you still need to put your core logic into the constructor?
    – Dirk Boer
    Nov 15 at 19:08










  • @DirkBoer why is the generic parameter in the Get method rather than the type? It requires the class's consumers to behave well rather than allowing the compiler to enforce good behavior. Consider: string s = _SubEvents.Get(() => "s"); int fortyTwo = _SubEvents.Get(() => 42);
    – phoog
    Nov 15 at 20:15








  • 1




    @phoog It was directed at t3chb0t, but to your point about reducing verbosity, just changing the class declaration to MyLazy<T> and changing private object _Value; to private T _Value; wouldn't increase the verbosity, I don't think.
    – jpmc26
    Nov 15 at 21:58






  • 1




    @phoog If the class name is a major source of boilerplate, the OP surely has a severe problem with their naming techniques. Maybe they should consider namespaces or the aliasing form of using. =) I believe the question is referring to passing in the initialization function, which might be long and complex for legitimate reasons. The example usages of this class do also suggest the OP is passing in long lambdas.
    – jpmc26
    Nov 15 at 22:14




















up vote 10 down vote














Meaning that I have to put half of my logic concerning the lazy property in the constructor, and having more boilerplate code.




This is a little speculative, but I think you have an XY problem. You're trying to reduce boilerplate, but there are probably better ways to do that than what you've suggested.



If I understand correctly, your problem is that your classes look something like this:



public class MyClass
{
    private Lazy<string> _MyStringValue;
    // ...

    public MyClass()
    {
        this._MyStringValue = new Lazy<string>(() => {
            var builder = new StringBuilder();
            builder.Append("a");
            // 50 more lines of expensive construction
            return builder.ToString();
        });

        // 100 more lines constructing OTHER lazy stuff
    }
}


Gloss over the details of building up the value; it's just an example. The important point is that you have all this logic here deep in your constructor.



I think there are two things you can do to alleviate this problem:





  1. Parameterize



    Why put all this logic in the constructor? You're losing a lot of reusability by doing that anyway. So make these things parameters and construct them elsewhere:



    public class MyClass
    {
        private Lazy<string> _MyStringValue;
        // ...

        public MyClass(Lazy<string> myStringValue)
        {
            this._MyStringValue = myStringValue;
        }
    }



  2. You can embed this construction logic in a method, and then pass the method to the Lazy constructor:



    class MyStringValueMaker
    {
        // Could be an instance method if that's more appropriate.
        // This is just for example
        public static string MakeValue()
        {
            var builder = new StringBuilder();
            builder.Append("a");
            // 50 more lines of expensive construction
            return builder.ToString();
        }
    }


    And then elsewhere:



    var myClass = new MyClass(new Lazy<string>(MyStringValueMaker.MakeValue));



Now suddenly everything is much better organized, more reusable, and simpler to understand.



If that's not what your class originally looked like, well, then I think you'd be better off posting a new question asking for a review on the original class to get ideas about how to improve.




































    up vote 9 down vote













    I like the idea, but you should carefully explain how this works in comments.



    Try this:



    MyLazy myLazy = new MyLazy();

    int value1 = myLazy.Get(() => 42);
    Console.WriteLine(value1);

    int value2 = myLazy.Get(() => 65);
    Console.WriteLine(value2);


    It correctly prints out:



    42
    42


    But even though we know the answer to everything is 42, it isn't that intuitive. The problem is obviously that you have to - or can - provide a creator function per call to Get<T>(Func<T> creator), that it is arbitrary, and that only the first one actually has any effect.

























    • 3




      The weird thing about this implementation is that the following compiles but fails with a runtime exception: MyLazy myLazy = new MyLazy(); int value1 = myLazy.Get(() => 42); Console.WriteLine(value1); string value2 = myLazy.Get(() => "sixty-five"); Console.WriteLine(value2);.
      – phoog
      Nov 15 at 21:00






    • 2




      @phoog: That is probably because Get(() => "sixty-five") gets resolved to Get<string> which then tries to do return (string)(object)42.
      – firda
      Nov 15 at 21:53








    • 2




      @firda yes, that is precisely why. My point with the example is to draw attention to the aspect of the design that is essentially an abuse of the generics system. Calling the property after the value has been initialized requires creating a delegate object that serves no purpose other than identifying the target type for the cast. The point of generics is so these checks can be done at compile time (which should save some nanoseconds at runtime), and that's not happening here.
      – phoog
      Nov 15 at 22:01




















    up vote 6 down vote














    is there anything else I'm missing that I should be concerned about in a high traffic application?




    By passing the delegate in the Get method, you're instantiating a delegate object each time you call the property. System.Lazy<T> creates the delegate instance only once.
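
A small sketch of that difference (the names are mine; MyLazy is the class from the question). A lambda that captures this allocates a new delegate on every property read, whereas a delegate stored once in a field, the way System.Lazy<T> holds on to its factory, is allocated only once:

using System;

public class SubEvents { }

public class SubEventsHolder
{
    private readonly MyLazy _subEvents = new MyLazy();
    private readonly Func<SubEvents> _loader;
    private readonly string _source = "db";

    public SubEventsHolder()
    {
        // Created once and reused on every read.
        _loader = () => Load(_source);
    }

    // The lambda captures 'this' (via _source), so a new delegate is allocated
    // on every read, even after the value has been created.
    public SubEvents SubEventsPerCall => _subEvents.Get(() => Load(_source));

    // Reuses the single delegate created in the constructor.
    public SubEvents SubEventsCached => _subEvents.Get(_loader);

    private static SubEvents Load(string source) => new SubEvents();
}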


































      up vote 6 down vote (answer by firda)














      is the current implementation 'safe'?




      No it isn't, because:




      1. You did not implement Double-checked locking correctly - you have two fields (_Value and _Loaded) instead of only one.

      2. You have added a new feature - Invalidate - that invalidates the correctness of double-checked locking even if you fix the previous problem (by e.g. boxing the value).


      Lessons to learn




      1. Always prefer well-known implementations (e.g. System.Lazy<T> or System.Threading.LazyInitializer) over your own - thread/process synchronization and cryptography are two of the heaviest topics to master; do not expect that you will be able to design these things yourself in a day, it may take years to master!

      2. Benchmark/profile before you optimize - lock is often good enough and you can try e.g. System.Threading.SemaphoreSlim or ReaderWriterLockSlim to speed it up a bit, but beware that it could get even worse - so again: test and measure first, be clever if you need to, be lazy if you can.

      3. If you still wish to write your own version then at least remember:


        1. Reads and writes can be reordered in any way (unless they depend on each other like e.g. in assignment a = b + c - the order of fetching b and c is not guaranteed, but write to a has to be done after the computation). Be extra cautious when synchronization involves more than one variable! You will likely think that it works because you do things in some order, but that is wrong! The order is not guaranteed across threads!


        2. volatile only guarantees the order of writes, not that other threads will see them immediately. EDIT: As Voo pointed out, the documentation there (from Microsoft!) appears to be incomplete. The language specification (10.5.3) states that volatile reads have acquire semantics and volatile writes have release semantics. Personal note: how is one supposed to get this right, if you cannot even trust the documentation?! But I found Volatile.Read, which gives very strong guarantees (in its documentation): Returns: The value that was read. This value is the latest written by any processor in the computer, regardless of the number of processors or the state of processor cache. System.Threading.Interlocked has some good methods too. EDIT: Be warned that .NET prior to 4.0 (yes, 3.5 and older = engine 2.0 and older) is flawed and can reorder volatile reads even if it should not => do not trust volatile unless you are an expert (know for sure what you are doing or are just experimenting; do not use volatile for production code, you have been warned, see the chat: https://chat.stackexchange.com/rooms/85938/lazyness).

        3. I am not an expert ;)






      EDIT: Seeing your third attempt I will try to give you my version, but I repeat: I am not an expert ;)



      public class MyLazy<T>
      {
          private class Box
          {
              public T Value { get; }
              public Box(T value) => Value = value;
          }

          private Box box;
          private Func<T> factory;
          private readonly object guard = new object();

          public MyLazy(Func<T> factory) => this.factory = factory;

          public T Value
          {
              get
              {
                  var box = Volatile.Read(ref this.box);
                  return box != null ? box.Value : GetOrCreate();
              }
          }

          private T GetOrCreate()
          {
              lock (guard)
              {
                  var box = Volatile.Read(ref this.box);
                  if (box != null) return box.Value;
                  box = new Box(factory());
                  Volatile.Write(ref this.box, box);
                  return box.Value;
              }
          }

          public void Invalidate() => Volatile.Write(ref this.box, null);

          public void Invalidate(Func<T> factory) // bonus
          {
              lock (guard)
              {
                  this.factory = factory;
                  Volatile.Write(ref this.box, null);
              }
          }
      }




      Important comments before I react to Theraot's answer




      1. I did not use lock in Invalidate() because although you could potentially call it just before GetOrCreate calls its Volatile.Write in a different thread, there is no real conflict; it behaves in such a way that Invalidate() was either called before the volatile read or after the volatile write (it behaves like before the volatile read even if in between). Read more about acquire/release to understand what I just wrote.

      2. It was not clear how many times Invalidate can be called (and the value re-created). I wrote it in such a way that you can actually construct the value multiple times, even changing the function (bonus Invalidate(Func...) ...already there before this edit). I was thinking that this could actually be a very useful feature - to have set and lazy get, either by allowing the factory to create different things and/or by changing the factory as part of Invalidate. Maybe not intended by the OP, but I was certainly thinking about how I could actually use this.

      3. I took phoog's answer as granted - it really is not a good idea to create the delegate on every access. There are even more possible problems you can read about in our (my and phoog's) comments under Henrik's answer (The API binds the type with the call allowing Get(() => 42) and later Get(() => "surprise"))


      Reaction to Theraot's answer



      OP's second and (EDIT: fourth - yes contains a flaw not returning from ExecutionAndPublication) version appear correct to me, because of acquire/release semantics (thanks go to Voo as already mentioned earlier).



      Two important comments to Theraot:




      1. What is the demonstration trying to prove? You are calling Init twice without Invalidate in between, which means that you are writing to _data when _initialized is already true. I am also surprised that it does not fail in .NET Core; it should, because the failure is not because of volatile but because of double-init (overwriting _data when _initialized signals it is valid).

      2. As for volatile in Framework vs. Core, I am not sure here, but my feeling is that the original description of volatile was that the compilers (IL+JIT) won't reorder reads/writes, because it was designed for Windows and x86 (strong memory model), and that it was later redesigned for ARM (weak memory model), changing the specification to acquire/release, which was originally implicit (x86's strong memory model). The original definition did not guarantee anything about the CPU and cache, but that was assumed and the specs got upgraded later. I am not a language lawyer ;)


      Final note



      I am (probably) not going to update my code for the single-init, single-invalidate scenario, and I am truly looking forward to somebody resolving the mess about volatile vs. .NET 1.1 vs. .NET 2.0+ vs. .NET Core, or where it got changed. I myself am here to learn something (and help where I can).



      EDIT: We agreed that acquire/release semantics hold, but only since .NET 4.0 (or 4.5?) ... see the chat: https://chat.stackexchange.com/rooms/85938/lazyness
      ... until proven otherwise. Use the framework-provided API!



      Personal note: I am glad I have learned something here, that was the second time (since reflection cache - field/property reordering). Thank you all for being (what you are).





























      • @DirkBoer: And your second version isn't safe either, for reasons you may find in this answer (two variables + volatile + Invalidate).
        – firda
        Nov 16 at 17:32










      • I would change "I am not an expert" to "We are not experts".
        – jpmc26
        2 days ago










      • @jpmc26: "We" is too broad, "You and me are not experts, at least not yet" ... which reminds me of: "The more you know the more you know how much you don't know". Anyway, I just wanted to say: this list is in no way complete, you won't become an expert just by reading it ;) (and I may even be wrong in something). Some other links: SO: Volatile vs. Interlocked vs. lock and System.Threading.Interlocked
        – firda
        2 days ago










      • As we already discussed in the other comments, point 2 is very misleading: volatile guarantees a great deal more than just the order of writes. It gives visibility and ordering guarantees. Which also influences point 1 - volatile is essential to get the necessary ordering guarantees to make multi variable things work.. but you can make it work.
        – Voo
        2 days ago








      • 1




        Note about why I did not use lock in Invalidate(): you could potentially call it just before GetOrCreate calls its Volatile.Write in different thread, but there is no real conflict, it behaves in such a way, that Invalidate() was either called before volatile read or after volatile write (behaves like before volatile read even if in-between). But feel free to add the lock for extra safe, read more about acquire/release to understand what I just wrote.
        – firda
        yesterday


















      up vote
      5
      down vote













Why not wrap a Lazy<T> and then lazily load the Lazy<T> in your Get?



public class MyLazy {
    private object lazy;
    private object _Lock = new object();

    public T Get<T>(Func<T> factory) {
        if (lazy == null) {
            lock (_Lock) {
                if (lazy == null) {
                    lazy = new Lazy<T>(factory);
                }
            }
        }
        return ((Lazy<T>)lazy).Value;
    }
}


This takes advantage of existing features that have been tried and tested, instead of trying to roll your own.
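A minimal usage sketch of this wrapper (Report, ReportStats and ComputeStats are made-up names, not part of the answer above):

public class Report
{
    private readonly MyLazy _stats = new MyLazy();

    // Computed once on first access; later calls return the cached Lazy<T>.Value.
    public ReportStats Statistics => _stats.Get(() => ComputeStats());

    private ReportStats ComputeStats()
    {
        // expensive work that may freely refer to instance members of Report
        return new ReportStats();
    }
}

public class ReportStats { }

Note that one MyLazy instance should only ever be used with a single T: the first call fixes the stored Lazy<T>, and a later call with a different T would fail on the cast.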

























      • 12




        I heard you like Lazy so I put some Lazy in your Lazy... ;)
        – Pieter Witvoet
        Nov 15 at 13:02








      • 1




        If you took the ConcurrentDictionary then you wouldn't need the lock - it has the very convenient GetOrAdd method ;-]
        – t3chb0t
        Nov 15 at 13:34












      • @t3chb0t good point
        – Nkosi
        Nov 15 at 13:35






      • 8




        A static cache without an eviction policy is called a memory leak.
        – Johnbot
        Nov 15 at 13:42










      • @Johnbot true. I'll revert that suggestion out for the time being.
        – Nkosi
        Nov 15 at 13:43


















      up vote
      3
      down vote













With the feedback that (my brain) could understand, I've come to this for now.




      • (I think) I literally copied the locking structure of Lazy, thread-safe Singleton

      • Included adding the volatile keyword for the _Loaded check

• Moved the generic definition to the class type, adding a bit more boilerplate code with the advantage of more type safety and no boxing

      • Added a warning to remind myself there might be issues


As for the advice "Leave it to the smarter people": that's something I can't work with. I like to learn, I like other people to learn, and I prefer a society where people are motivated to fail (at a calculated cost) in order to learn for themselves.



      I appreciate that everyone has a different opinion about that, that's okay.



I'm still not 100% sure whether this solves at least the thread-safety problems of the first version, because the conversation went a bit off-topic imo. If anyone who is knowledgeable can comment on that, I would appreciate it. For the rest, I'm going to try to use this code, see what it does in production, and see whether it causes (practical) problems for my caching of properties.



/// <summary>
/// Warning: might not be as performant (and safe?) as the Lazy<T>, see:
/// https://codereview.stackexchange.com/questions/207708/own-implementation-of-lazyt-object
/// </summary>
public class MyLazy<T>
{
    private T _Value;
    private volatile bool _Loaded;
    private object _Lock = new object();

    public T Get(Func<T> create)
    {
        if ( !_Loaded )
        {
            lock (_Lock)
            {
                if ( !_Loaded ) // double checked lock
                {
                    _Value = create();
                    _Loaded = true;
                }
            }
        }

        return _Value;
    }

    public void Invalidate()
    {
        lock ( _Lock )
            _Loaded = false;
    }
}
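For completeness, a minimal usage sketch of this generic version (Customer, Order and LoadOrders are placeholder names):

public class Customer
{
    private readonly MyLazy<Order[]> _orders = new MyLazy<Order[]>();

    // Loaded on first access; Invalidate() forces a reload on the next access.
    public Order[] Orders => _orders.Get(LoadOrders);

    private Order[] LoadOrders()
    {
        // expensive call that can use instance members of Customer
        return new Order[0];
    }

    public void RefreshOrders() => _orders.Invalidate();
}

public class Order { }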
























      • 2




As for the advice "Leave it to the smarter people". That's something I can't work with. I fully support this point! If it's so easy to do it the wrong way then perhaps there should be some documentation on absolute DONT'S and which parts of the language are dangerous if used the wrong way. Without this question and Eric's feedback many people would probably never hear about the time travel that is not explicitly forbidden by the C# specification, so it's definitely worth spreading (and improving the docs).
        – t3chb0t
        Nov 16 at 11:16








      • 2




        I think Eric's advice wasn't to stop trying to experiment and learn about advanced concepts but rather to rely on battle-hardened correct implementations from the framework where you can. If you'd like to learn more about some of this, I'd recommend this: albahari.com/threading/…
        – RobH
        Nov 16 at 15:22






      • 2




@t3chb0t: with cryptography, the most important piece of advice is: don't roll your own. Not because anyone thinks you shouldn't be allowed to learn, but because it's a deceptively difficult subject, where it's easy to think you've done things right, but exceedingly difficult to actually get it right. I think the same can be said for concurrency (which is what Eric just did). It's easy to get this wrong not because it's poorly documented, but because it's a very difficult subject.
        – Pieter Witvoet
        Nov 16 at 15:31








      • 2




        @Adriano No your argumentation doesn't work. The compiler may not access _Value before it has read _Loaded since _Loaded is now volatile which forbids that reordering (note that Eric talks about the original code which does not use volatile). The problem here is the Invalidate though which at the very least causes a race condition which might lead to reading a partially initialized value. But if you remove the Invalidate the current implementation should be fine as far as I can see.
        – Voo
        2 days ago








      • 2




        @Voo: volatile: Adding the volatile modifier ensures that all threads will observe volatile writes performed by any other thread in the order in which they were performed. It does not ensure anything about any other variable/field (and it can be cached because cache-rows are big enough to hold both _Value and fields from some object that happens to be at lower address, which you can access and make _Value loaded to cache).
        – firda
        2 days ago




















      up vote
      1
      down vote













I've done the same analysis for the .NET Core version of Lazy and stripped out the following things, which do not suit my needs anyway in this case (the new Lazy(Func<T>) constructor):




      • Exception caching

• all the other modes than ExecutionAndPublication

      • comments

      • Contract.Assert's

      • any other things like Debugger visualization, ToString() etc

      • Inlined some functions that stopped having any logic because of stripping the other things

• I've reversed the order of the methods to make it read more chronologically


      Questions/notes that I have:





• would anything change if _state were changed from an object to a boolean in this stripped version? (in the original version it is used for alternative paths, like different publication modes or a cached exception) Never mind - the state object is also being used as a lock. I expect that we can change it to a plain object though, saving the separate class?

      • is it allowed to inline some other methods, like ViaFactory?

• it feels a bit strange to me that Value is called recursively the first time, but that might have to do with other code that I stripped - i.e. the exception caching.


      ..



internal class LazyHelper
{
    internal LazyHelper()
    {
    }
}

public class Lazy<T>
{
    private volatile LazyHelper _state;
    private Func<T> _factory;
    private T _value;

    public Lazy(Func<T> valueFactory)
    {
        _factory = valueFactory;
        _state = new LazyHelper();
    }

    public T Value => _state == null ? _value : CreateValue();

    private T CreateValue()
    {
        var state = _state;
        if (state != null)
        {
            ExecutionAndPublication(state);
        }
        return Value;
    }

    private void ExecutionAndPublication(LazyHelper executionAndPublication)
    {
        lock (executionAndPublication)
        {
            ViaFactory();
        }
    }

    private void ViaFactory()
    {
        _value = _factory();
        _state = null; // volatile write, must occur after setting _value
    }
}


Not sure if it is allowed, but if I inline even more you really get a 'normal' double-checked lock with a volatile. The main difference is that the locker object is also being used as a _loaded flag.



public class Lazy<T>
{
    private volatile object _state;
    private Func<T> _factory;
    private T _value;

    public Lazy(Func<T> valueFactory)
    {
        _factory = valueFactory;
        _state = new object();
    }

    public T Value => _state == null ? _value : CreateValue();

    private T CreateValue()
    {
        var state = _state;
        if (state != null)
        {
            lock (state)
            {
                if (ReferenceEquals(_state, state)) // seems to act as the second check
                {
                    _value = _factory();
                    _state = null;
                }
            }
        }
        return Value;
    }
}




























      • 1. Remember there is not Invalidate, what I gave you has it. 2. volatile keywords may work the same as Volatile.Read and Volatile.Write, but the documentation for the former is better and it can also be converted to Visual Basic, so, I rather used the methods instead of the keyword. 3. .NET Core appears to free the lock after initialization (again: no Invalidate) and uses acquire/release semantics which are fences (no reordering past the point, but find that for your self for full explanation).
        – firda
        yesterday










• T Value => _state ... has acquire semantics (Volatile.Read) which means that it cannot see the null until _state = null is performed in CreateValue with release semantics (Volatile.Write), and that no read can be reordered before the volatile read (value cannot be cached) and no write can be reordered after the volatile write (cannot be partially constructed).
        – firda
        yesterday


















      up vote
      0
      down vote













I thought when I asked this question it was going to be about the passing of the delegate, but it is the double-checked lock that seems to be quite difficult to grasp.



Not one to stop my curiosity, I took the .NET source of Lazy and stripped out the following things that do not suit my needs anyway in this case:




      • Exception caching

• all the other modes than ExecutionAndPublication

      • comments

      • Contract.Assert's

      • any other things like Debugger visualization, ToString() etc

      • Inlined some functions that stopped having any logic because of stripping the other things


This is the result so far of the stripped Lazy. Be aware that I tried my best not to change anything that might change behaviour, and hopefully didn't. If anyone else has updates, those would be more than welcome.



      Note that I did this for study and learning and not to put it in production.



      Some questions that I still have to further simplify:





• can the try/finally, Monitor.Enter/Exit and the lockTaken be simplified to a lock {} statement in this case? If I follow Eric's answer here that would be a yes, but maybe I'm overlooking something: https://stackoverflow.com/a/2837224/647845 Probably not, because of the if statement. (See the sketch of what a lock statement expands to, below this list.)

      • can the sentinel be replaced by a boolean and a null assignment for the factory?

      • when you assume the logic is good, can the whole CreateValue() be replaced by an inline new Boxed(factory())?

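As a reference for the first question: this is roughly what the compiler generates for a lock statement (a sketch, not the exact emitted code). The Monitor.Enter here is unconditional, which is why the conditional enter around the sentinel cannot be expressed as a plain lock block:

object obj = m_threadSafeObj;
bool lockTaken = false;
try
{
    Monitor.Enter(obj, ref lockTaken);
    // ... body of the lock statement ...
}
finally
{
    if (lockTaken)
        Monitor.Exit(obj);
}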

      -



#if !FEATURE_CORECLR
[HostProtection(Synchronization = true, ExternalThreading = true)]
#endif
public class Lazy2<T>
{
    class Boxed
    {
        internal Boxed(T value)
        {
            m_value = value;
        }
        internal T m_value;
    }

    static readonly Func<T> ALREADY_INVOKED_SENTINEL = delegate
    {
        return default(T);
    };

    private Boxed m_boxed;
    private Func<T> m_valueFactory;
    private object m_threadSafeObj;

    public Lazy2(Func<T> valueFactory)
    {
        m_threadSafeObj = new object();
        m_valueFactory = valueFactory;
    }

    public T Value
    {
        get
        {
            Boxed boxed = null;
            if (m_boxed != null)
            {
                boxed = m_boxed;
                if (boxed != null)
                {
                    return boxed.m_value;
                }
            }

            return LazyInitValue();
        }
    }

    private T LazyInitValue()
    {
        Boxed boxed = null;

        object threadSafeObj = Volatile.Read(ref m_threadSafeObj);
        bool lockTaken = false;

        try
        {
            if (threadSafeObj != (object)ALREADY_INVOKED_SENTINEL)
                Monitor.Enter(threadSafeObj, ref lockTaken);

            if (m_boxed == null)
            {
                boxed = CreateValue();
                m_boxed = boxed;
                Volatile.Write(ref m_threadSafeObj, ALREADY_INVOKED_SENTINEL);
            }
            boxed = m_boxed;
        }
        finally
        {
            if (lockTaken)
                Monitor.Exit(threadSafeObj);
        }

        return boxed.m_value;
    }

    private Boxed CreateValue()
    {
        Boxed boxed = null;

        Func<T> factory = m_valueFactory;
        m_valueFactory = ALREADY_INVOKED_SENTINEL;

        boxed = new Boxed(factory());

        return boxed;
    }
}




























      • Do you still plan to add Invalidate() => Volatile.Write(ref m_boxed, null)? I would rather use Boxed boxed = Volatile.Read(ref m_boxed), make m_threadSafeObj readonly (use lock and remove the sentinel). But remember I am not an expert, just interested ;)
        – firda
        2 days ago












      • Hi @firda, nothing planned yet :) just was interested to break everything down to the core. For my own version it would be useful if I have an Invalidate(), but I first need to have better understanding of all the moving parts.
        – Dirk Boer
        2 days ago










      • This is it. ONE volatile variable, not two.
        – Adriano Repetti
        2 days ago








      • 1




@Voo: There is no volatile in Reference Source either, and there is no Invalidate in this version, so the logic appears to be: 1. m_boxed can never be assigned until the Boxed is fully constructed (it is a class, not a struct) and 2. even if the effect is not immediately visible to all threads, they will just take the lock to find out that it was already created.
        – firda
        yesterday






      • 2




        It would be better to post a follow-up question instead of multiple versions as answers with questions in it.
        – Heslacher
        9 hours ago



















      up vote
      15
      down vote



      accepted










      Having backported Lazy to .NET 2.0, I want to offer a variant...



      First of all, this looks a lot like LazyInitializer. If that works for you, then there is no need to go about creating a Lazy<T> variant.
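For illustration, a minimal sketch of the LazyInitializer shape (Catalog, Product and LoadProducts are placeholder names); it covers the "factory passed at the call site" scenario without a custom class, though it has no built-in Invalidate:

using System;
using System.Threading;

public class Catalog
{
    private Product[] _products;     // the lazily created value
    private bool _productsLoaded;    // set to true once created
    private object _productsLock;    // lock object, created on demand by EnsureInitialized

    public Product[] Products =>
        LazyInitializer.EnsureInitialized(ref _products, ref _productsLoaded, ref _productsLock,
            () => LoadProducts());   // the lambda may refer to 'this'

    private Product[] LoadProducts() => new Product[0];
}

public class Product { }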



      With that said, let's continue...





      Before going into the code, let us see some things that may go wrong:




      • The ABA problem

      • Thread visibility/Code reordering

      • Double initialization

      • Value factory recursion

      • Failed initialization

      • Preventing Garbage Collection


The code in your question will fail on visibility. Other threads will not be aware that you changed _Loaded. In fact, it can fall victim to the value of _Loaded being optimized so that it is only read once per method call. It will also be problematic to have create call Get<T>: it would overwrite _Value... meaning that _Value can change after it has been initialized.



Also, execution could be reordered so that _Loaded is set before create is called.





      The second version:



volatile is not enough of a guarantee here. While you have _Loaded volatile, changes to _Value might not be visible. Thus, another thread could reach the object in a state where it claims to be loaded but _Value appears unchanged. In fact, it could appear partially changed (a torn read).



Note: a caveat: since I am used to targeting multiple versions of .NET, I consider something wrong if it would fail in any of them. That includes .NET 2.0, .NET 3.0 and .NET 3.5, which have broken volatile semantics. This makes me extra careful around volatile.



      We still have to worry about create calling Get.



      I am glad to see T moved to the class level.





      The third version:



The initialization has been moved to the constructor. I think this defeats the purpose: if you can have it in the constructor... why don't you use Lazy<T>?
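For comparison, this is the trade-off in plain Lazy<T> terms: the factory is wired up in the constructor, where the lambda may already capture this (a sketch with placeholder names Invoice, InvoiceLine and LoadLines):

public class Invoice
{
    private readonly Lazy<InvoiceLine[]> _lines;

    public Invoice()
    {
        // the lambda captures 'this', so it can use instance members
        _lines = new Lazy<InvoiceLine[]>(() => LoadLines());
    }

    public InvoiceLine[] Lines => _lines.Value;

    private InvoiceLine[] LoadLines() => new InvoiceLine[0];
}

public class InvoiceLine { }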



      The partially changed problem is fixed by using Boxed. You could have used StrongBox<T> here.



      I see you inherited some weirdness from the reference source... The reason they check Boxed that weirdly (also the reason why they do not use StrongBox<T>) is because it might not contain a value, but a cached Exception.



In your previous version, when create throws, Get throws, and another thread gets a chance to initialize. However, it is possible to initialize Lazy<T> in such a way that it stores the exception and throws it for every thread that tries to read it (see LazyThreadSafetyMode).



      Aside from that, the use of Volatile fixed the other visibility and reordering problems.



Note: When factory throws, m_valueFactory has already been set to the sentinel. Meaning that the next thread to call will run the sentinel, resulting in Value being initialized to default(T). This is probably not what you want.






      • Can it be simplified to a lock?




        No. As you have already guessed.





      • Can the sentinel be replaced by a boolean and a null assignment for the factory?




        Depending on what you want (see note above), yes. The reference source has it the way it has it because of LazyThreadSafetyMode.





      • When you assume the logic is good, can the whole CreateValue() be replaced by an inline new Boxed(factory())?




No. You could have boxed = new Boxed(m_valueFactory());. However, you are still holding a reference to m_valueFactory, which might prevent garbage collection. Also, in case of recursion, this would be a StackOverflowException. You may set m_valueFactory to null... However, because of the possibility of recursion, that could lead to a NullReferenceException. Thus, we need to set it to a sentinel, or set it to null and add a null check. Either way, CreateValue is not a one-liner.






      The fourth version:



      Be careful with the writes to _value, they might not be atomic when it is a value type. There is a good reason why the prior version had boxed: another thread could see a partially initialized value (torn reads). This is safe in .NET Core, but not in .NET Framework.



      Note: This code actually works well in modern .NET, in particular in .NET Core.



      I would fix this by using StrongBox<T>. Something similar to firda's answer.



About firda’s answer… It holds onto valueFactory, and keeping that reference after initialization keeps alive whatever it points to, preventing the garbage collector from collecting it. This is by design. It might not be what you want. One possible path to modify it is to set factory to null at the end of the lock in GetOrCreate (before returning, of course).
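In outline (this is not firda's actual code, just the shape of the change, ignoring the Volatile.Read/Write details and using made-up field names _syncRoot, _box and _factory):

lock (_syncRoot)
{
    if (_box == null)                    // not yet created
    {
        _box = new StrongBox<T>(_factory());
        _factory = null;                 // release the factory so the GC can collect whatever it captured
    }
    return _box.Value;
}

Note that after this change Invalidate can no longer recreate the value, which is presumably why the reference is kept by design.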



      Back to the problem of torn reads, consider this demonstration:



public struct Long8
{
    public long D1;
    public long D2;
    public long D3;
    public long D4;
    public long D5;
    public long D6;
    public long D7;
    public long D8;

    public long Val => D1 + D8;
}

public class Program
{
    public static void Main()
    {
        // We are using Long8 because writing to it would not be atomic
        // Here DataInit is poor man's lazy
        var dataInit = new DataInit<Long8>();
        // value will hold the values that the reader gets
        var value = default(Long8);
        // we use done to signal the writer to stop
        var done = 0;
        var reader = Task.Run
        (
            () =>
            {
                // We are reading the values set by the writer
                // We expect Val to be 1 because
                // the writer does not write something that would give a different value
                var spinWait = new SpinWait();
                do
                {
                    // This loop is here to wait for the writer to set a value since the last call to invalidate
                    while (!dataInit.TryGet(out value))
                    {
                        spinWait.SpinOnce();
                    }
                    // dataInit.Invalidate();
                    // Console.Write(".");
                    spinWait.SpinOnce();
                } while (value.Val == 1);
                // Console.WriteLine();
            }
        );
        var writer = Task.Run
        (
            () =>
            {
                // Here we will write { D1 = 1, D8 = 0 } and { D1 = 0, D8 = 1 }
                // In either case Val should be 1
                var spinWait = new SpinWait();
                while (Volatile.Read(ref done) == 0)
                {
                    dataInit.Init(new Long8 { D1 = 1, D8 = 0 });
                    spinWait.SpinOnce();
                    dataInit.Invalidate();
                    dataInit.Init(new Long8 { D1 = 0, D8 = 1 });
                    spinWait.SpinOnce();
                    dataInit.Invalidate();
                }
            }
        );
        reader.Wait();
        System.Diagnostics.Debugger.Break();
        Volatile.Write(ref done, 1);
        writer.Wait();
        Console.WriteLine(value.Val);
    }

    public class DataInit<T>
    {
        private T _data;
        private volatile bool _initialized;

        public void Init(T value)
        {
            // So, we write to a generic field
            // And then to a volatile bool to notify that we did write to the field
            // Yet, when T is a value type larger than the word size of the CPU
            // Another thread may find that _initialized = true but _data is not what we would expect
            _data = value; // Write 1 <----------------
            _initialized = true; // Write 2 (volatile) <------
        }

        public void Invalidate()
        {
            _initialized = false;
        }

        public bool TryGet(out T value)
        {
            // Here we are reading the volatile before we read the other field...
            // Does it make any difference?
            if (_initialized)
            {
                value = _data;
                if (_initialized)
                {
                    return true;
                }
            }
            value = default(T);
            return false;
        }
    }
}


This code demonstrates a torn read, in both .NET Framework and .NET Core. What it shows is that a reader can read _data in between _initialized = false; _data = value; _initialized = true; resulting in something partially set. To be more specific: the reader sees that the value is initialized and starts reading, while the other thread is still modifying the value. Your code does not suffer this particular fate as long as it does not have Invalidate (as firda mentions in the comments). This is something to be careful with.



This is based on Reproduce torn reads of decimal in C#, with some modifications introduced by VisualMelon; it is for demonstration purposes only, not something you want in production.



Also consider the cases where valueFactory reads Value and where valueFactory throws.






• I expect that we can change it to a plain object though, saving the separate class?




        Yes, you can use object instead of LazyHelper.





      • is it allowed to inline some other methods, like ViaFactory?




        Yes, in fact, the compiler may do this optimization.





• it feels a bit strange to me that Value is called recursively the first time, but that might have to do with other code that I stripped - i.e. the exception caching.




I am not sure why they went with this design in corefx. However - at least for your code - it is a better design to have ViaFactory and ExecutionAndPublication return the value than to read the field again (see the sketch after this list).





• Not sure if it is allowed, but if I inline even more you really get a 'normal' double-checked lock with a volatile. The main difference is that the locker object is also being used as a _loaded flag.




Yeah, sure. The first thread will set _state = null; and then the second thread gets to enter the lock, but you do not want it to call _factory again. So you check whether _state is still what you set it to (it isn't, it is null), and that allows it to correctly decide not to enter the if.





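Applied to the inlined variant from the question, the return-based shape might look like this (a sketch reusing the same fields; not the corefx code):

private T CreateValue()
{
    var state = _state;                  // volatile read (acquire)
    if (state == null)
    {
        return _value;                   // another thread already published the value
    }
    lock (state)
    {
        if (ReferenceEquals(_state, state))
        {
            _value = _factory();
            _state = null;               // volatile write (release) - publishes _value
        }
        return _value;                   // return directly instead of re-entering Value
    }
}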

      Alright, now I get to write one...



      The requirements I will take are:




      • To be thread-safe

      • To allow lazy initialization

      • To allow invalidation

      • To pass the valueFactory on Get


      I will also add a TryGet method, for those threads that want to get the value if it is initialized but do not want to attempt to initialize it.



      First I have made one based on firda's:



public class MyLazy<T>
{
    private readonly object _syncroot = new object();
    private StrongBox<T> _value;

    public T Get(Func<T> valueFactory)
    {
        if (valueFactory == null)
        {
            throw new ArgumentNullException(nameof(valueFactory));
        }
        var box = Volatile.Read(ref _value);
        return box != null ? box.Value : GetOrCreate();

        T GetOrCreate()
        {
            lock (_syncroot)
            {
                box = Volatile.Read(ref _value);
                if (box != null)
                {
                    return box.Value;
                }
                box = new StrongBox<T>(valueFactory());
                Volatile.Write(ref _value, box);
                return box.Value;
            }
        }
    }

    public void Invalidate()
    {
        Volatile.Write(ref _value, null);
    }

    public bool TryGet(out T value)
    {
        var box = Volatile.Read(ref _value);
        if (box != null)
        {
            value = box.Value;
            return true;
        }
        value = default(T);
        return false;
    }
}


Let us be clear, firda's is a good solution. This one mainly differs in that I am not passing valueFactory to the constructor (I am also using StrongBox<T>, because there is no reason not to use the provided API). As you mention in the question, in "95% of the cases" you can have valueFactory in the constructor; this is for the rest of the cases.



      Notice that when Invalidate is called, the next thread to enter will run the valueFactory.



And for kicks I have done one without a monitor (lock). This version uses ManualResetEventSlim instead. This means that it can wake threads all at once instead of one by one, which would be good for situations where there are many threads running in parallel. However, it requires extra work to be correct. For starters, if valueFactory throws, we have to wake the threads, have one of them try to initialize, and have the rest go back to waiting… and do this until they all fail or one succeeds, meaning that we need to loop. Furthermore, with lock we get reentry handled for us automatically; with ManualResetEventSlim we have to handle that ourselves.



Also, I recommend having a look at Interlocked.CompareExchange.



      public class Lazy<T>
      {
      private readonly ManualResetEventSlim _event;
      private Thread _ownerThread;
      private int _status;
      private StrongBox<T> _value;

      public Lazy()
      {
      _event = new ManualResetEventSlim(false);
      }

      public T Get(Func<T> valueFactory)
      {
      // _status == 0 :: Not Initialized
      // _status == 1 :: Initializing or Initialized
      // _ownerThread == null :: Not Initialized or Initialized
      // _ownerThread != null :: Initializing
      // _value == null :: Not Initialized or Initializing
      // _value != null :: Initialized
      if (valueFactory == null)
      {
      throw new ArgumentNullException(nameof(valueFactory));
      }
      var ownerThread = Volatile.Read(ref _ownerThread);
      if (ownerThread == Thread.CurrentThread)
      {
      // We have reentry, this is the same thread that is initializing on the value
      // We could:
      // 1. return default(T);
      // 2. throw new LockRecursionException()
      // 3. let it in
      // We are doing that last one
      return Create();
      }
      // Note: It is ok to loop here. Since Threads will only ever change _ownerThread to themselves...
// there is no chance that _ownerThread suddenly changes to the current thread unexpectedly
      // Why we loop? See at the bottom of the loop.
      while (true)
      {
      ownerThread = Volatile.Read(ref _ownerThread);
      if (ownerThread == null)
      {
      // There is no thread initializing currently
      // Either the value has already been initialized or not
      // Check status to tell
      if (Volatile.Read(ref _status) == 1)
      {
      // Value has already been initialized
      var box = Volatile.Read(ref _value);
      if (box == null)
      {
      // Another thread invalidated after we did read _status but before we got Value
      continue;
      }
      return box.Value;
      }
      // Value is yet to be initialized
      // Try to get the right to initialize, Interlocked.CompareExchange only one thread gets in
      var found = Interlocked.CompareExchange(ref _status, 1, 0);
      if (found == 0)
      {
      // We got the right to initialize
      var result = default(T);
      try
      {
      try
      {
      Volatile.Write(ref _ownerThread, Thread.CurrentThread);
      result = Create();
      }
      finally
      {
      // Other threads could be waiting. We need to let them advance.
      _event.Set();
      // We want to make sure _ownerThread is null so another thread can enter
      Volatile.Write(ref _ownerThread, null);
      }
      }
      catch (Exception exception) // Just appeasing the tools telling me I must put an exception here
      {
      GC.KeepAlive(exception); // Just appeasing the tools telling me I am not using the exception, this is a noop.
      // valueFactory did throw an exception
// We have the option to devise a method to cache it and throw it in every calling thread
      // However, I will be reverting to an uninitialized state
      Volatile.Write(ref _status, 0);
      // Reset event so that threads can wait for initialization
      _event.Reset();
      throw;
      // Note: I know this is a weird configuration. Why is this catch not in the inner try?
      // Because I want to make sure I set _status to 0 after the code from finally has run
      // This way I am also sure that another thread cannot enter while finally is running, even if there was an exception
      // In particular, I do not want Invalidate to call _event.Reset before we call _event.Set
      }
      return result;
      }
      // We didn't get the right to initialize
      // Another thread managed to enter first
      }
      // Another thread is initializing the value
      _event.Wait();
      // Perhaps there was an exception during initialization
      // We need to loop, if everything is ok, `ownerThread == null` and `Volatile.Read(ref _status) == 1`
      // Otherwise, we would want a chance to try to initialize
      }

      T Create()
      {
      // calling valueFactory, it could throw
      var created = valueFactory();
      Volatile.Write(ref _value, new StrongBox<T>(created));
      return created;
      }
      }

      public T Get()
      {
      var ownerThread = Volatile.Read(ref _ownerThread);
      if (ownerThread == Thread.CurrentThread)
      {
      // We have reentry, this is the same thread that is initializing on the value
      // We could:
      // 1. return default(T);
      // 2. throw new LockRecursionException()
      // We are doing the last one
      throw new LockRecursionException();
      }
      // Note: It is ok to loop here. Since Threads will only ever change _ownerThread to themselves...
// there is no chance that _ownerThread suddenly changes to the current thread unexpectedly
      // Why we loop? See at the bottom of the loop.
      while (true)
      {
      ownerThread = Volatile.Read(ref _ownerThread);
      if (ownerThread == null)
      {
      // There is no thread initializing currently
      // Either the value has already been initialized or not
      // Check status to tell
      if (Volatile.Read(ref _status) == 1)
      {
      // Value has already been initialized
      var box = Volatile.Read(ref _value);
      if (box == null)
      {
      // Another thread invalidated after we did read _status but before we got Value
      continue;
      }
      return box.Value;
      }
      }
      // Value is yet to be initialized, perhaps another thread is initializing the value
      _event.Wait();
      // Perhaps there was an exception during initialization
      // We need to loop, if everything is ok, `ownerThread == null` and `Volatile.Read(ref _status) == 1`
      // Otherwise, we keep waiting
      }
      }

      public void Invalidate()
      {
      var ownerThread = Volatile.Read(ref _ownerThread);
      if (ownerThread == null)
      {
      // There is no thread initializing currently
      // Either the value has already been initialized or not
      // Try to say it is not initialized
      var found = Interlocked.CompareExchange(ref _status, 0, 1);
      if (found == 1)
      {
// We did set it to not initialized
// Now we have the responsibility to notify that the value is not set
      _event.Reset();
      // And the privilege to destroy the value >:v (because we are using StrongBox<T> this is atomic)
      Volatile.Write(ref _value, null);
      }
      }
      // Either:
      // 1. Another thread is initializing the value. In this case we pretend we got here before that other thread did enter.
      // 2. The value is yet to be initialized. In this case we have nothing to do.
      // 3. Another thread managed to invalidate first. Let us call it a job done.
      // 4. This thread did invalidate. Good job.
      // 5. We have reentry
      }

      public bool TryGet(out T value)
      {
      var ownerThread = Volatile.Read(ref _ownerThread);
      if (ownerThread == null)
      {
      // There is no thread initializing currently
      // Either the value has already been initialized or not
      // Check status to tell
      if (Volatile.Read(ref _status) == 1)
      {
      // Value has already been initialized
      var box = Volatile.Read(ref _value);
      if (box != null)
      {
      value = box.Value;
      return true;
      }
      }
      }
      // Either:
      // 1. Another thread invalidated after we did read _status but before we got Value
      // 2. Value is yet to be initialized
      // 3. Another thread is initializing the value
      // 4. We have reentry
      value = default(T);
      return false;
      }
      }


As you can see, this version is more complicated. It would be harder to maintain and harder to debug. Yet, it gives a greater level of control. For example, I have managed to give you a Get method that waits for another thread to initialize. I agree, this is a bad API… it would be better if it were a Task.



      I did try to implement it using Task<T> instead, however I could not get it right. I believe that it would only make sense with proper exception caching.



      So, let us add exception caching and use Task<T>.



public class TaskLazy<T>
{
    private Thread _ownerThread;
    private TaskCompletionSource<T> _source;

    public TaskLazy()
    {
        _source = new TaskCompletionSource<T>();
    }

    public Task<T> Get(Func<T> valueFactory)
    {
        if (valueFactory == null)
        {
            throw new ArgumentNullException(nameof(valueFactory));
        }
        var source = Volatile.Read(ref _source);
        if (!source.Task.IsCompleted)
        {
            var ownerThread = Interlocked.CompareExchange(ref _ownerThread, Thread.CurrentThread, null);
            if (ownerThread == Thread.CurrentThread)
            {
                return Task.Run(valueFactory);
            }
            if (ownerThread == null)
            {
                try
                {
                    source.SetResult(valueFactory());
                }
                catch (Exception exception)
                {
                    source.SetException(exception);
                    throw; // <-- you probably want to throw here, right?
                }
                finally
                {
                    Volatile.Write(ref _ownerThread, null);
                }
            }
        }
        return source.Task;
    }

    public Task<T> Get()
    {
        var source = Volatile.Read(ref _source);
        if (!source.Task.IsCompleted)
        {
            var ownerThread = Volatile.Read(ref _ownerThread);
            if (ownerThread == Thread.CurrentThread)
            {
                throw new LockRecursionException();
            }
        }
        return source.Task;
    }

    public void Invalidate()
    {
        if (Volatile.Read(ref _source).Task.IsCompleted)
        {
            Volatile.Write(ref _source, new TaskCompletionSource<T>());
        }
    }
}


Ah, beautiful. By removing the idea that if a valueFactory fails another one gets a chance to run, and replacing it with exception caching, we can remove the looping; this allows us to return Task<T>, and you get stuff like Wait and ContinueWith for free.
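A minimal usage sketch (the field, method and file names are placeholders); because Get returns a Task<T>, callers can await it or attach continuations:

private static readonly TaskLazy<string> _settings = new TaskLazy<string>();

public static async Task<string> GetSettingsAsync()
{
    // the first caller runs the factory; later callers get the same (already completed) task
    return await _settings.Get(() => File.ReadAllText("settings.json"));
}

// once initialized, the parameterless Get() returns the completed task, so you can also chain:
// _settings.Get().ContinueWith(t => Console.WriteLine(t.Result.Length));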





      Addendum:



      About volatile, have a read of:




      • The C# Memory Model in Theory and Practice, Part 2

      • Threading in C#, Part 4 (Advanced Threading)






























      • 1




        Comments are not for extended discussion; this conversation has been moved to the chatroom that was already created for the related discussion on another answer.
        – Vogel612
        22 hours ago

















      up vote
      15
      down vote



      accepted










      Having backported Lazy to .NET 2.0, I want to offer a variant...



      First of all, this looks a lot like LazyInitializer. If that works for you, then there is no need to go about creating a Lazy<T> variant.



      With that said, let's continue...





      Before going into the code, let us see some things that may go wrong:




      • The ABA problem

      • Thread visibility/Code reordering

      • Double initialization

      • Value factory recursion

      • Failed initialization

      • Preventing Garbage Collection


      The code on your question will fail on visibility. Other threads will not be aware that you changed _Loaded. In fact, it can fall victim to the value of _Loaded being optimized, so that it is only read once per method call. It will be problematic to have create call Get<T>, it would overwrite _Value... meaning that _Value can change after initialized.



      Also, execution order could be reordered to set _Loaded before calling create.





      The second version:



      volatile is not enough guarantee here. While you have _Loaded volatile, changes to _Value could not be visible. Thus, for another thread, it could reach the object in an state where it claims to be loaded but _Value appears unchanged. In fact, it could appear partially changed (torn read).



      Note: A caveat, given that I am used to target multiple versions of .NET, I consider wrong something that would fail in any of them. And that includes .NET 2.0, .NET 3.0 and .NET 3.5. Which, have broken volatile semantics. This makes me extra careful around volatile.



      We still have to worry about create calling Get.



      I am glad to see T moved to the class level.





      The third version:



      The initialization has been moved to the constructor. I think this defies the purpose, if you can have it in the constructor... why don't you use Lazy<T>?



      The partially changed problem is fixed by using Boxed. You could have used StrongBox<T> here.



      I see you inherited some weirdness from the reference source... The reason they check Boxed that weirdly (also the reason why they do not use StrongBox<T>) is because it might not contain a value, but a cached Exception.



      In your previous version, when create throws, Get throws, and another thread gets a chance to initialize. However, it is possible to initialize Lazy<T> in such way that it stores the exception and throws it for every thread that tries to read it (see LazyThreadSafetyMode).



      Aside from that, the use of Volatile fixed the other visibility and reordering problems.



      Note: When factory throws, m_valueFactory has already been set to the sentinel. Meaning that next thread to call will run the sentinel, resulting in the Value being initialized to default(T). This is probably not what you want.






      • Can it be simplified to a lock?




        No. As you have already guessed.





      • Can the sentinel be replaced by a boolean and a null assignment for the factory?




        Depending on what you want (see note above), yes. The reference source has it the way it has it because of LazyThreadSafetyMode.





      • When you assume the logic is good, can the whole CreateValue() be replaced by an inline new Boxed(factory())?




        No. You could have boxed = new Boxed(m_valueFactory());. However, you are still holding a reference to m_valueFactory, which might prevent garbage collection. Also, in case of recursion, this would be an StackOverflowException. You may set m_valueFactory to null... However, because of the possibility of recursion, that could lead to a NullReferenceException. Thus, we need to set it to a sentinel or set it to null and add a null check. Either way, CreateValue is not a one liner.






      The fourth version:



      Be careful with the writes to _value, they might not be atomic when it is a value type. There is a good reason why the prior version had boxed: another thread could see a partially initialized value (torn reads). This is safe in .NET Core, but not in .NET Framework.



      Note: This code actually works well in modern .NET, in particular in .NET Core.



      I would fix this by using StrongBox<T>. Something similar to firda's answer.



      About firda’s answer… It holds onto valueFactory, keeping that references after initialization keeps alive whatever it points to, preventing the garbage collector from taking it. This is by design. It might not be what you want. One possible path to modify it is to set factory to null at the end of the lock in GetOrCreate (before returning, of course).



      Back to the problem of torn reads, consider this demonstration:



      public struct Long8
      {
      public long D1;
      public long D2;
      public long D3;
      public long D4;
      public long D5;
      public long D6;
      public long D7;
      public long D8;

      public long Val => D1 + D8;
      }

      public class Program
      {
      public static void Main()
      {
      // We are using Lonh8 because writing to it would not be atomic
      // Here DataInit is poor man's lazy
      var dataInit = new DataInit<Long8>();
      // value will hold the values that the reader gets
      var value = default(Long8);
      // we use done to signal the writer to stop
      var done = 0;
      var reader = Task.Run
      (
      () =>
      {
      // We are reading the values set by the writer
      // We expect Val to be 1 because
      // the writer does not write something that would give a different value
      var spinWait = new SpinWait();
      do
      {
      // This loop is here to wait for the writer to set a value since the last call to invalidate
      while (!dataInit.TryGet(out value))
      {
      spinWait.SpinOnce();
      }
      // dataInit.Invalidate();
      // Console.Write(".");
      spinWait.SpinOnce();
      } while (value.Val == 1);
      // Console.WriteLine();
      }
      );
      var writer = Task.Run
      (
      () =>
      {
      // Here we will write { D1 = 1, D8 = 0 } and { D1 = 0, D8 = 1 }
      // In either case Val should be 1
      var spinWait = new SpinWait();
      while (Volatile.Read(ref done) == 0)
      {
      dataInit.Init(new Long8 { D1 = 1, D8 = 0 });
      spinWait.SpinOnce();
      dataInit.Invalidate();
      dataInit.Init(new Long8 { D1 = 0, D8 = 1 });
      spinWait.SpinOnce();
      dataInit.Invalidate();
      }
      }
      );
      reader.Wait();
      System.Diagnostics.Debugger.Break();
      Volatile.Write(ref done, 1);
      writer.Wait();
      Console.WriteLine(value.Val);
      }

      public class DataInit<T>
      {
      private T _data;
      private volatile bool _initialized;

      public void Init(T value)
      {
      // So, we write to a generic field
      // And then to a volatile bool to notify that we did write to the field
      // Yet, when T is a value type larger than the word size of the CPU
      // Another thread may find that _initialized = true but _data is not what we would expect
      _data = value; // Write 1 <----------------
      _initialized = true; // Write 2 (volatile) <------
      }

      public void Invalidate()
      {
      _initialized = false;
      }

      public bool TryGet(out T value)
      {
      // Here we are reading the volatile before we read the other field...
      // Does it make any difference?
      if (_initialized)
      {
      value = _data;
      if (_initialized)
      {
      return true;
      }
      }
      value = default(T);
      return false;
      }
      }
      }


      This code demonstrates a torn read. In either .NET Framework and .NET Core. What is shows is that can read _data in between _initialised = false; _data = value; _initialised = true; resulting is something partially set. To be more specific: The reader reads that the value is initialized and start reading, meanwhile the other thread is modifying the value. Your code does not suffer this particular fate as long as it does not have Invalidate (as firda mentions on the comments). This is something to be careful with.



      This is based Reproduce torn reads of decimal in c# with some modifications introduced by VisualMelon, this is for demonstration purposes only, this is not something you want in production.



      Also consider the cases where valueFactory read Value and where valueFactory throws.






      • I expect that we can change it for object though? saving the seperate class.




        Yes, you can use object instead of LazyHelper.





      • is it allowed to inline some other methods, like ViaFactory?




        Yes, in fact, the compiler may do this optimization.





      • it feels to me a bit strange that Value being called recursively the first time but that might have to do with other code that I stripped - i.e. the Exception caching.




        I am not sure why they went with this design in corefx. However - at least for your code - it is a better design to have ViaFactory and ExecutionAndPublication return than to read the field again.





      • Not sure if it is allowed, but if I even inline more you really get a 'normal' double checked lock with a volatile. The main difference is that the locker object is also being used as a _loaded flag.




        Yeah, sure. The first thread will set _state = null; and then the second thread gets to enter the lock, but you do not want it to call _factory again. So, you check if it is still what you set it to (it isn't, it is null) and that allows to correctly decide to not enter the if.






      Alright, now I get to write one...



      The requirements I will take are:




      • To be thread-safe

      • To allow lazy initialization

      • To allow invalidation

      • To pass the valueFactory on Get


      I will also add a TryGet method, for those threads that want to get the value if it is initialized but do not want to attempt to initialize it.



      First I have made one based on firda's:



      public class MyLazy<T>
      {
      private readonly object _syncroot = new object();
      private StrongBox<T> _value;

      public T Get(Func<T> valueFactory)
      {
      if (valueFactory == null)
      {
      throw new ArgumentNullException(nameof(valueFactory));
      }
      var box = Volatile.Read(ref _value);
      return box != null ? box.Value : GetOrCreate();

      T GetOrCreate()
      {
      lock (_syncroot)
      {
      box = Volatile.Read(ref _value);
      if (box != null)
      {
      return box.Value;
      }
      box = new StrongBox<T>(valueFactory());
      Volatile.Write(ref _value, box);
      return box.Value;
      }
      }
      }

      public void Invalidate()
      {
      Volatile.Write(ref _value, null);
      }

      public bool TryGet(out T value)
      {
      var box = Volatile.Read(ref _value);
      if (box != null)
      {
      value = box.Value;
      return true;
      }
      value = default(T);
      return false;
      }
      }


      Let us be clear, firda's is a good solution. This one mainly differs in that I am not passing valueFactory to the constructor (I am also using StrongBox<T> because there is no reason to not use the provided API). As you mention in the question "95% of the cases" you can have valueFactory in the constructor, this is for the rest of the cases.



      Notice that when Invalidate is called, the next thread to enter will run the valueFactory.



      And for the kicks I have done one without monitor (lock). This version uses ManualResetEventSlim instead. This means that it can awake threads all at once instead of one by one. Which would be good for the situation where there are many threads running in parallel. However, it requires extra work to work correctly. For starters, if valueFactory throws, we got to awake the threads, have one of them try to initialize, and have the rest go to wait again… do this until they all fail or one succeeds, meaning that we need to loop. Furthermore, with lock we get reentry handled for us automatically, with ManualResetEventSlim we got to handle that.



      Also, I recommend to have a look at Interlocked.CompareExchange.



      public class Lazy<T>
      {
      private readonly ManualResetEventSlim _event;
      private Thread _ownerThread;
      private int _status;
      private StrongBox<T> _value;

      public Lazy()
      {
      _event = new ManualResetEventSlim(false);
      }

      public T Get(Func<T> valueFactory)
      {
      // _status == 0 :: Not Initialized
      // _status == 1 :: Initializing or Initialized
      // _ownerThread == null :: Not Initialized or Initialized
      // _ownerThread != null :: Initializing
      // _value == null :: Not Initialized or Initializing
      // _value != null :: Initialized
      if (valueFactory == null)
      {
      throw new ArgumentNullException(nameof(valueFactory));
      }
      var ownerThread = Volatile.Read(ref _ownerThread);
      if (ownerThread == Thread.CurrentThread)
      {
      // We have reentry, this is the same thread that is initializing on the value
      // We could:
      // 1. return default(T);
      // 2. throw new LockRecursionException()
      // 3. let it in
      // We are doing that last one
      return Create();
      }
      // Note: It is ok to loop here. Since Threads will only ever change _ownerThread to themselves...
      // there is no chance that _ownerThread suddently changes to the current thread unexpectedly
      // Why we loop? See at the bottom of the loop.
      while (true)
      {
      ownerThread = Volatile.Read(ref _ownerThread);
      if (ownerThread == null)
      {
      // There is no thread initializing currently
      // Either the value has already been initialized or not
      // Check status to tell
      if (Volatile.Read(ref _status) == 1)
      {
      // Value has already been initialized
      var box = Volatile.Read(ref _value);
      if (box == null)
      {
      // Another thread invalidated after we did read _status but before we got Value
      continue;
      }
      return box.Value;
      }
      // Value is yet to be initialized
      // Try to get the right to initialize, Interlocked.CompareExchange only one thread gets in
      var found = Interlocked.CompareExchange(ref _status, 1, 0);
      if (found == 0)
      {
      // We got the right to initialize
      var result = default(T);
      try
      {
      try
      {
      Volatile.Write(ref _ownerThread, Thread.CurrentThread);
      result = Create();
      }
      finally
      {
      // Other threads could be waiting. We need to let them advance.
      _event.Set();
      // We want to make sure _ownerThread is null so another thread can enter
      Volatile.Write(ref _ownerThread, null);
      }
      }
      catch (Exception exception) // Just appeasing the tools telling me I must put an exception here
      {
      GC.KeepAlive(exception); // Just appeasing the tools telling me I am not using the exception, this is a noop.
      // valueFactory did throw an exception
      // We have the option to device a method to cache it and throw it in every calling thread
      // However, I will be reverting to an uninitialized state
      Volatile.Write(ref _status, 0);
      // Reset event so that threads can wait for initialization
      _event.Reset();
      throw;
      // Note: I know this is a weird configuration. Why is this catch not in the inner try?
      // Because I want to make sure I set _status to 0 after the code from finally has run
      // This way I am also sure that another thread cannot enter while finally is running, even if there was an exception
      // In particular, I do not want Invalidate to call _event.Reset before we call _event.Set
      }
      return result;
      }
      // We didn't get the right to initialize
      // Another thread managed to enter first
      }
      // Another thread is initializing the value
      _event.Wait();
      // Perhaps there was an exception during initialization
      // We need to loop, if everything is ok, `ownerThread == null` and `Volatile.Read(ref _status) == 1`
      // Otherwise, we would want a chance to try to initialize
      }

      T Create()
      {
      // calling valueFactory, it could throw
      var created = valueFactory();
      Volatile.Write(ref _value, new StrongBox<T>(created));
      return created;
      }
      }

      public T Get()
      {
      var ownerThread = Volatile.Read(ref _ownerThread);
      if (ownerThread == Thread.CurrentThread)
      {
      // We have reentry, this is the same thread that is initializing on the value
      // We could:
      // 1. return default(T);
      // 2. throw new LockRecursionException()
      // We are doing the last one
      throw new LockRecursionException();
      }
      // Note: It is ok to loop here. Since Threads will only ever change _ownerThread to themselves...
// there is no chance that _ownerThread suddenly changes to the current thread unexpectedly
      // Why we loop? See at the bottom of the loop.
      while (true)
      {
      ownerThread = Volatile.Read(ref _ownerThread);
      if (ownerThread == null)
      {
      // There is no thread initializing currently
      // Either the value has already been initialized or not
      // Check status to tell
      if (Volatile.Read(ref _status) == 1)
      {
      // Value has already been initialized
      var box = Volatile.Read(ref _value);
      if (box == null)
      {
      // Another thread invalidated after we did read _status but before we got Value
      continue;
      }
      return box.Value;
      }
      }
      // Value is yet to be initialized, perhaps another thread is initializing the value
      _event.Wait();
      // Perhaps there was an exception during initialization
      // We need to loop, if everything is ok, `ownerThread == null` and `Volatile.Read(ref _status) == 1`
      // Otherwise, we keep waiting
      }
      }

      public void Invalidate()
      {
      var ownerThread = Volatile.Read(ref _ownerThread);
      if (ownerThread == null)
      {
      // There is no thread initializing currently
      // Either the value has already been initialized or not
      // Try to say it is not initialized
      var found = Interlocked.CompareExchange(ref _status, 0, 1);
      if (found == 1)
      {
// We did set it to not initialized
// Now we have the responsibility to notify that the value is not set
      _event.Reset();
      // And the privilege to destroy the value >:v (because we are using StrongBox<T> this is atomic)
      Volatile.Write(ref _value, null);
      }
      }
      // Either:
      // 1. Another thread is initializing the value. In this case we pretend we got here before that other thread did enter.
      // 2. The value is yet to be initialized. In this case we have nothing to do.
      // 3. Another thread managed to invalidate first. Let us call it a job done.
      // 4. This thread did invalidate. Good job.
      // 5. We have reentry
      }

      public bool TryGet(out T value)
      {
      var ownerThread = Volatile.Read(ref _ownerThread);
      if (ownerThread == null)
      {
      // There is no thread initializing currently
      // Either the value has already been initialized or not
      // Check status to tell
      if (Volatile.Read(ref _status) == 1)
      {
      // Value has already been initialized
      var box = Volatile.Read(ref _value);
      if (box != null)
      {
      value = box.Value;
      return true;
      }
      }
      }
      // Either:
      // 1. Another thread invalidated after we did read _status but before we got Value
      // 2. Value is yet to be initialized
      // 3. Another thread is initializing the value
      // 4. We have reentry
      value = default(T);
      return false;
      }
      }
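
For orientation, here is a minimal usage sketch of the class above. SubEvents and LoadSubEvents are placeholders taken from the question, and since the name collides with System.Lazy&lt;T&gt; you would rename or fully qualify it in real code:

var lazySubEvents = new Lazy<SubEvents>();

// The first caller runs the factory; concurrent callers block on the event
// until the value is published (or get their own chance to initialize if the factory threw).
SubEvents events = lazySubEvents.Get(() => LoadSubEvents());

// A caller that only wants an already-initialized value, without blocking:
if (lazySubEvents.TryGet(out SubEvents cached))
{
    Console.WriteLine(cached);
}

// Drop the value; the next Get(valueFactory) initializes again.
lazySubEvents.Invalidate();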


As you can see, this version is more complicated. It would be harder to maintain and harder to debug. Yet, it gives a greater level of control. For example, I have managed to give you a Get method that waits for another thread to initialize. I agree, this is a bad API… it would be better if it were a Task<T>.



I did try to implement it using Task<T> instead; however, I could not get it right. I believe that it would only make sense with proper exception caching.



      So, let us add exception caching and use Task<T>.



      public class TaskLazy<T>
      {
      private Thread _ownerThread;
      private TaskCompletionSource<T> _source;

      public TaskLazy()
      {
      _source = new TaskCompletionSource<T>();
      }

      public Task<T> Get(Func<T> valueFactory)
      {
      if (valueFactory == null)
      {
      throw new ArgumentNullException(nameof(valueFactory));
      }
      var source = Volatile.Read(ref _source);
      if (!source.Task.IsCompleted)
      {
      var ownerThread = Interlocked.CompareExchange(ref _ownerThread, Thread.CurrentThread, null);
      if (ownerThread == Thread.CurrentThread)
      {
      return Task.Run(valueFactory);
      }
      if (ownerThread == null)
      {
      try
      {
      source.SetResult(valueFactory());
      }
      catch (Exception exception)
      {
      source.SetException(exception);
      throw; // <-- you probably want to throw here, right?
      }
      finally
      {
      Volatile.Write(ref _ownerThread, null);
      }
      }
      }
      return source.Task;
      }

      public Task<T> Get()
      {
      var source = Volatile.Read(ref _source);
      if (!source.Task.IsCompleted)
      {
      var ownerThread = Volatile.Read(ref _ownerThread);
      if (ownerThread == Thread.CurrentThread)
      {
      throw new LockRecursionException();
      }
      }
      return source.Task;
      }

      public void Invalidate()
      {
      if (Volatile.Read(ref _source).Task.IsCompleted)
      {
      Volatile.Write(ref _source, new TaskCompletionSource<T>());
      }
      }
      }


Ah, beautiful. By removing the idea that if a valueFactory fails another one gets a chance to run, and replacing it with exception caching, we can remove the looping; this allows us to return Task<T>, and you get things like Wait and ContinueWith for free.
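
To make the intended call pattern concrete, here is a hypothetical usage sketch (inside an async method; SubEvents and LoadSubEvents are placeholders from the question):

var lazy = new TaskLazy<SubEvents>();

// The first caller to pass a factory runs it and completes the shared task;
// later and concurrent callers simply await the cached task, including a
// cached exception if the factory threw.
SubEvents value = await lazy.Get(() => LoadSubEvents());

// Callers that must not trigger initialization just await the bare task:
SubEvents again = await lazy.Get();

// Once the task has completed (successfully or faulted), Invalidate swaps in a
// fresh TaskCompletionSource so the next Get(valueFactory) initializes again.
lazy.Invalidate();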





      Addendum:



      About volatile, have a read of:




      • The C# Memory Model in Theory and Practice, Part 2

      • Threading in C#, Part 4 (Advanced Threading)







share|improve this answer

edited 19 hours ago
answered yesterday

Theraot

New contributor
Theraot is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
Check out our Code of Conduct.

• 1
  Comments are not for extended discussion; this conversation has been moved to the chatroom that was already created for the related discussion on another answer.
  – Vogel612
  22 hours ago
      up vote
      38
      down vote














      is the current implementation 'safe'?




Absolutely not. The fact that you had to ask this question indicates that you do not understand enough about threading to build your own mechanisms like this. You need to have a deep and thorough understanding of the memory model to build these mechanisms. That is why you should always rely on the mechanisms provided for you in the framework, which were written by experts.



      Why is it unsafe? Consider the following scenario. We have two threads, A and B. _Value is null and _Loaded is false.




      • We're on thread A.

      • The memory location of _Value is loaded into the processor cache for the CPU that thread A is affinitized to. It is null.

      • We switch to thread B.

      • Thread B reads _Loaded as false, takes the lock, checks _Loaded again, calls create, assigns _Value and _Loaded and leaves the lock.

      • We switch back to thread A.


      • _Loaded is now true, so thread A returns _Value from the processor cache, which is null.


Thread A is not required to invalidate the cache because thread A never takes a lock!



      Now, I made an argument here from processor caches. This is the wrong argument to make in general. Rather, what you must do when trying to build a new threading mechanism like this is to not reason about any specific processor architecture, but rather to reason about the abstract memory model of the C# language. C# permits reads and writes to move forwards and backwards in time in multithreaded programs. Any time travel that is not explicitly forbidden by the C# specification must be considered to be possible. Your task is to then write code that is correct for any possible combination of movements of reads and writes in time regardless of whether they are really possible on a specific processor or not.



      Note that in particular the C# specification does not require that all threads observe a consistent set of write and read re-orderings. It is perfectly legal and possible for two threads to disagree on how a read was re-ordered with respect to a write.



      If writing correct programs in a world where all reads and writes can be moved around in time sounds hard, that's because it is. I am not competent to do this work, and I do not attempt to. I leave it to experts.
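
To make "rely on the mechanisms provided for you in the framework" concrete, here is a sketch of what that can look like with the built-in Lazy&lt;T&gt; and LazyInitializer; SubEvents and LoadSubEvents are placeholders borrowed from the question, not code from this answer:

using System;
using System.Threading;

public class Publisher
{
    // Lazy<T>: the factory runs only on first access to Value, under the
    // requested thread-safety mode.
    private readonly Lazy<SubEvents> _subEvents;

    public Publisher()
    {
        _subEvents = new Lazy<SubEvents>(LoadSubEvents, LazyThreadSafetyMode.ExecutionAndPublication);
    }

    public SubEvents SubEvents => _subEvents.Value;

    // LazyInitializer: no wrapper object at all; the fields are initialized on first use.
    private SubEvents _cached;
    private bool _cachedInitialized;
    private object _cachedLock;

    public SubEvents SubEventsViaInitializer =>
        LazyInitializer.EnsureInitialized(ref _cached, ref _cachedInitialized, ref _cachedLock, LoadSubEvents);

    private SubEvents LoadSubEvents()
    {
        // [....]
        return new SubEvents();
    }
}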




      are there any important performance considerations vs the original one?




      Only you can answer that question. Answer performance questions by gathering real-world empirical data.



      However, I can say a few things about this problem in general.



      The first is: double-checked locking is intended to avoid the cost of the lock. Let's examine the assumptions underlying that intention. The assumption is that the cost of taking the lock is too high on the uncontended path. Is that assumption warranted? What is the cost of taking an uncontended lock? Did you measure it? Did you compare it against the cost of the lock-avoiding check? (Since the lock-avoiding check code is wrong, testing it for performance is not actually meaningful since we can always write faster code if we don't care about correctness, but still, we need to know whether this intervention is an improvement.) And most importantly, is the cost of taking an uncontended lock relevant to the consumer of this code? Because they are the stakeholder whose opinions are relevant; what do they say about the cost of an uncontended lock?
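
If you do want such a number, here is a sketch of how one might measure it. This assumes the BenchmarkDotNet package; the class and method names are illustrative, and a micro-benchmark like this only answers the narrow question of uncontended-lock overhead, not whether the intervention matters to your callers:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class UncontendedLockBenchmark
{
    private readonly object _gate = new object();
    private int _field = 42;

    // Baseline: a plain field read with no synchronization at all.
    [Benchmark(Baseline = true)]
    public int PlainRead() => _field;

    // The same read, but taking and releasing an uncontended lock each time.
    [Benchmark]
    public int ReadUnderUncontendedLock()
    {
        lock (_gate)
        {
            return _field;
        }
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<UncontendedLockBenchmark>();
}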



      Let's suppose that the cost of an uncontended lock is relevant. Then surely the cost of a contended lock is enormously relevant. You've built a mechanism that potentially contends a lot of threads! What are the alternatives that you considered here? For example, you could avoid the lock altogether by deciding that it is OK for the create function to be called on multiple threads -- perhaps we know that it is cheap and idempotent. Those threads can then race to their heart's content to initialize the field, and we can use an interlocked exchange to ensure that we get a consistent value. That avoids the cost of the lock altogether, but it creates a different kind of cost, and puts a requirement on the caller to pass an idempotent creator.
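A minimal sketch of that racing, interlocked approach, assuming a reference-type value and a cheap, idempotent creator (the LazyRace and GetOrCreate names are made up for the example); it is essentially what LazyInitializer.EnsureInitialized(ref T, Func<T>) already provides, so prefer the framework method in real code:

using System;
using System.Threading;

static class LazyRace
{
    public static T GetOrCreate<T>(ref T target, Func<T> create) where T : class
    {
        // Volatile read so a value published by another thread is observed in full.
        var existing = Volatile.Read(ref target);
        if (existing != null)
            return existing;

        // Several threads may race here and each run create(); that is the accepted cost.
        var candidate = create();

        // Only the first writer wins; every thread then returns the single stored instance.
        return Interlocked.CompareExchange(ref target, candidate, null) ?? candidate;
    }
}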



      Let's consider other aspects of your solution with respect to performance. You allocate the lock object regardless of whether you ever take the lock, and you keep it forever. What's the burden on the garbage collector? What is the impact on collection pressure? These things are all deeply relevant to performance. Again, remember, the assumption here is that we are so worried about the couple of nanoseconds it takes to enter and leave an uncontended lock that we're willing to write a double checked lock. If those nanoseconds are relevant then surely the milliseconds it takes to do an extra collection are incredibly relevant!




      is there anything else I'm missing that I should be concerned about in a high traffic application?




      I don't know how to answer that question.






      share|improve this answer



















      edited Nov 15 at 19:35

























      answered Nov 15 at 19:09









      Eric Lippert

      12.6k42746












      • 11




        mhmm... this sounds sane but I'm still not sure how to apply this knowledge in a real-life scenario or simply to this question - maybe you could add some practical advice? Currently it's more like a rant on the OP for not knowing what they're doing ;-( I don't find this answer very helpful... sorry. It's not the same level as your other posts or blogs.
        – t3chb0t
        Nov 15 at 19:18








      • 8




        @t3chb0t: As for practical advice, I gave the best practical advice I know. Do not attempt to roll your own low-level mechanisms; use mechanisms that were built by experts.
        – Eric Lippert
        Nov 15 at 19:22






      • 6




        @EricLippert You can take code quality seriously and still frame your critique in such a way that doesn't make Dirk feel "burned". The first paragraph is especially condescending in my opinion.
        – CaTs
        Nov 16 at 6:33






      • 14




        @CaTs: That said, I note that you are responding to tone rather than to the factual content. That Dirk and I both lack the knowledge and ability to do this work correctly is a fact. That the best course of action is to use tools correctly that were built by people who are competent is also a fact. If stating those facts emphatically makes people uncomfortable, well, that's a price I'm willing to pay in order to clearly and unequivocally state that rolling your own threading mechanisms is incredibly dangerous and easy to get wrong.
        – Eric Lippert
        Nov 16 at 6:51






      • 7




        The purpose of StackOverflow is to create searchable artifacts - a library of questions and answers that may be useful to future programmers, and hopefully also useful to the original asker. If the CodeReview stack has the same goals, then this is an excellent answer. Considering that this is Eric Lippert (who served on the C# design team), adding a code snippet or other suggestion would have sent the wrong message to future programmers: "If I include this snippet / follow this pattern, then my homebrew thread management solution will be safe and work well - Eric Lippert said so."
        – benj2240
        Nov 16 at 15:22




















      up vote
      13
      down vote














      is there anything else I'm missing that I should be concerned about in a high traffic application?




Yes, your Lazy<T>.Value isn't generic anymore but an object, and if Func<T> returns a value type then a lot of boxing/unboxing will take place. This might hurt performance.



      I think a LazyFactory.GetOrCreate<T>(...) would do a better job.
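A rough sketch of making the wrapper generic, keeping the question's double-checked-locking shape purely to show the type change (the thread-safety concerns raised in the other answers still apply):

// Sketch only: storing the value as T instead of object avoids boxing for value
// types and lets the compiler check the type at the call site.
public class MyLazy<T>
{
    private T _value;
    private bool _loaded;
    private readonly object _lock = new object();

    public T Get(Func<T> create)
    {
        if (_loaded)
            return _value;

        lock (_lock)
        {
            if (!_loaded)   // double-checked lock, same shape as the original
            {
                _value = create();
                _loaded = true;
            }
            return _value;
        }
    }
}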






      share|improve this answer





















      answered Nov 15 at 10:31









      t3chb0t

      33.6k744108
















      • Hi @t3chb0t, good point & thanks for your feedback. I could solve this with making the class itself actually (optionally) generic.
        – Dirk Boer
        Nov 15 at 19:08












      • About LazyFactory.GetOrCreate<T> wouldn't you still need to put your core logic into the constructor?
        – Dirk Boer
        Nov 15 at 19:08










      • @DirkBoer why is the generic parameter in the Get method rather than the type? It requires the class's consumers to behave well rather than allowing the compiler to enforce good behavior. Consider: string s = _SubEvents.Get(() => "s"); int fortyTwo = _SubEvents.Get(() => 42);
        – phoog
        Nov 15 at 20:15








      • 1




        @phoog It was directed at t3chb0t, but to your point about reducing verbosity, just changing the class declaration to MyLazy<T> and changing private object _Value; to private T _Value; wouldn't increase the verbosity, I don't think.
        – jpmc26
        Nov 15 at 21:58






      • 1




        @phoog If the class name is a major source of boilerplate, the OP surely has a severe problem with their naming techniques. Maybe they should consider namespaces or the aliasing form of using. =) I believe the question is referring to passing in the initialization function, which might be long and complex for legitimate reasons. The example usages of this class do also suggest the OP is passing in long lambdas.
        – jpmc26
        Nov 15 at 22:14




























      up vote
      10
      down vote














      Meaning that I have to put half of my logic concerning the lazy property in the constructor, and having more boilerplate code.




      This is a little speculative, but I think you have an XY problem. You're trying to reduce boilerplate, but there are probably better ways to do that than what you've suggested.



      If I understand correctly, your problem is that your classes look something like this:



public class MyClass
{
    private Lazy<string> _MyStringValue;
    // ...

    public MyClass()
    {
        this._MyStringValue = new Lazy<string>(() =>
        {
            var builder = new StringBuilder();
            builder.Append("a");
            // 50 more lines of expensive construction
            return builder.ToString();
        });

        // 100 more lines constructing OTHER lazy stuff
    }
}


      Gloss over the details of building up the value; it's just an example. The important point is that you have all this logic here deep in your constructor.



      I think there are two things you can do to alleviate this problem:





      1. Parameterize



Why put all this logic in the constructor? You're losing a lot of reusability by doing that anyway. So make these things parameters and construct them elsewhere:



public class MyClass
{
    private Lazy<string> _MyStringValue;
    // ...

    public MyClass(Lazy<string> myStringValue)
    {
        this._MyStringValue = myStringValue;
    }
}



      2. You can embed this construction logic in a method, and then pass the method to the Lazy constructor:



class MyStringValueMaker
{
    // Could be an instance method if that's more appropriate.
    // This is just for example.
    public static string MakeValue()
    {
        var builder = new StringBuilder();
        builder.Append("a");
        // 50 more lines of expensive construction
        return builder.ToString();
    }
}


        And then elsewhere:



        var myClass = new MyClass(new Lazy<string>(MyStringValueMaker.MakeValue));



      Now suddenly everything is much better organized, more reusable, and simpler to understand.



      If that's not what your class originally looked like, well, then I think you'd be better off posting a new question asking for a review on the original class to get ideas about how to improve.






      share|improve this answer































          edited Nov 15 at 22:19

























          answered Nov 15 at 22:12









          jpmc26

          61938


























              up vote
              9
              down vote













              I like the idea, but you should carefully explain how this works in comments.



              Try this:



                MyLazy myLazy = new MyLazy();

              int value1 = myLazy.Get(() => 42);
              Console.WriteLine(value1);

              int value2 = myLazy.Get(() => 65);
              Console.WriteLine(value2);


              It correctly prints out:



              42
              42


But even though we know the answer to everything is 42, it isn't that intuitive. The problem is obviously that you have to (or can) provide a creator function per call to Get<T>(Func<T> creator), and that the function is arbitrary, but only the first one actually has any effect.
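For contrast, a short sketch with the framework's Lazy<T>, which binds the factory at construction time so there is no second, silently ignored creator:

var lazy = new Lazy<int>(() => 42);
Console.WriteLine(lazy.Value);   // 42
Console.WriteLine(lazy.Value);   // 42; the factory runs once and later reads return the cached value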






              share|improve this answer






















              edited Nov 15 at 17:47

























              answered Nov 15 at 17:14









              Henrik Hansen

              6,0081722




              6,0081722








              • 3




                The weird thing about this implementation is that the following compiles but fails with a runtime exception: MyLazy myLazy = new MyLazy(); int value1 = myLazy.Get(() => 42); Console.WriteLine(value1); string value2 = myLazy.Get(() => "sixty-five"); Console.WriteLine(value2);.
                – phoog
                Nov 15 at 21:00






              • 2




                @phoog: That is probably because Get(() => "sixty-five") gets resolved to Get<string> which then tries to do return (string)(object)42.
                – firda
                Nov 15 at 21:53








              • 2




                @firda yes, that is precisely why. My point with the example is to draw attention to the aspect of the design that is essentially an abuse of the generics system. Calling the property after the value has been initialized requires creating a delegate object that serves no purpose other than identifying the target type for the cast. The point of generics is so these checks can be done at compile time (which should save some nanoseconds at runtime), and that's not happening here.
                – phoog
                Nov 15 at 22:01
















              • 3




                The weird thing about this implementation is that the following compiles but fails with a runtime exception: MyLazy myLazy = new MyLazy(); int value1 = myLazy.Get(() => 42); Console.WriteLine(value1); string value2 = myLazy.Get(() => "sixty-five"); Console.WriteLine(value2);.
                – phoog
                Nov 15 at 21:00






              • 2




                @phoog: That is probably because Get(() => "sixty-five") gets resolved to Get<string> which then tries to do return (string)(object)42.
                – firda
                Nov 15 at 21:53








              • 2




                @firda yes, that is precisely why. My point with the example is to draw attention to the aspect of the design that is essentially an abuse of the generics system. Calling the property after the value has been initialized requires creating a delegate object that serves no purpose other than identifying the target type for the cast. The point of generics is so these checks can be done at compile time (which should save some nanoseconds at runtime), and that's not happening here.
                – phoog
                Nov 15 at 22:01










              3




              3




              The weird thing about this implementation is that the following compiles but fails with a runtime exception: MyLazy myLazy = new MyLazy(); int value1 = myLazy.Get(() => 42); Console.WriteLine(value1); string value2 = myLazy.Get(() => "sixty-five"); Console.WriteLine(value2);.
              – phoog
              Nov 15 at 21:00




              The weird thing about this implementation is that the following compiles but fails with a runtime exception: MyLazy myLazy = new MyLazy(); int value1 = myLazy.Get(() => 42); Console.WriteLine(value1); string value2 = myLazy.Get(() => "sixty-five"); Console.WriteLine(value2);.
              – phoog
              Nov 15 at 21:00




              2




              2




              @phoog: That is probably because Get(() => "sixty-five") gets resolved to Get<string> which then tries to do return (string)(object)42.
              – firda
              Nov 15 at 21:53






              @phoog: That is probably because Get(() => "sixty-five") gets resolved to Get<string> which then tries to do return (string)(object)42.
              – firda
              Nov 15 at 21:53






              2




              2




              @firda yes, that is precisely why. My point with the example is to draw attention to the aspect of the design that is essentially an abuse of the generics system. Calling the property after the value has been initialized requires creating a delegate object that serves no purpose other than identifying the target type for the cast. The point of generics is so these checks can be done at compile time (which should save some nanoseconds at runtime), and that's not happening here.
              – phoog
              Nov 15 at 22:01






              @firda yes, that is precisely why. My point with the example is to draw attention to the aspect of the design that is essentially an abuse of the generics system. Calling the property after the value has been initialized requires creating a delegate object that serves no purpose other than identifying the target type for the cast. The point of generics is so these checks can be done at compile time (which should save some nanoseconds at runtime), and that's not happening here.
              – phoog
              Nov 15 at 22:01












              up vote
              6
              down vote














              is there anything else I'm missing that I should be concerned about in a high traffic application?




              By passing the delegate in the Get method, you're instantiating a delegate object each time you call the property. System.Lazy<T> creates the delegate instance only once.






answered Nov 15 at 21:25 by phoog
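
To make that concrete, here is a minimal sketch (an editorial addition, not part of phoog's answer) that keeps the question's MyLazy but creates the factory delegate once, in the constructor, so reading the property allocates nothing on repeated access. MyService is a hypothetical containing class; SubEvents and LoadSubEvents are the names from the question.

    public class MyService
    {
        private readonly MyLazy _subEvents = new MyLazy();
        private readonly Func<SubEvents> _loadSubEvents;

        public MyService()
        {
            // method-group conversion done once; no per-access delegate allocation
            _loadSubEvents = LoadSubEvents;
        }

        public SubEvents SubEvents => _subEvents.Get(_loadSubEvents);

        private SubEvents LoadSubEvents()
        {
            // [....] free to use instance members here
            return new SubEvents();
        }
    }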

























                      up vote
                      6
                      down vote














                      is the current implementation 'safe'?




                      No it isn't, because:




                      1. You did not implement Double-checked locking correctly - you have two fields (_Value and _Loaded) instead of only one.

2. You have added a new feature - Invalidate - that invalidates the correctness of double-checked locking even if you fix the previous problem (e.g. by boxing the value).


                      Lessons to learn




1. Always prefer well-known implementations (e.g. System.Lazy<T> or System.Threading.LazyInitializer) over your own - thread/process synchronization and cryptography are two of the hardest topics to master; do not expect to design these things yourself in a day, it may take years to master! (A sketch follows this list.)

                      2. Benchmark/profile before you optimize - lock is often good enough and you can try e.g. System.Threading.SemaphoreSlim or ReaderWriterLockSlim to speed it up a bit, but beware that it could get even worse - so again: test and measure first, be clever if you need to, be lazy if you can.

                      3. If you still wish to write your own version then at least remember:


  1. Reads and writes can be reordered in any way (unless they depend on each other, e.g. in the assignment a = b + c the order of fetching b and c is not guaranteed, but the write to a has to happen after the computation). Be extra cautious when synchronization involves more than one variable! You will likely think that it works because you do things in some order, but that is wrong! The order is not guaranteed across threads!


  2. volatile only guarantees the order of writes, not that other threads will see them immediately. EDIT: As Voo pointed out, the documentation there (from Microsoft!) appears to be incomplete. The language specification (10.5.3) states that volatile reads have acquire semantics and volatile writes have release semantics. Personal note: how is one supposed to get this right if you cannot even trust the documentation?! But I found Volatile.Read, which gives very strong guarantees (in its documentation): "Returns: The value that was read. This value is the latest written by any processor in the computer, regardless of the number of processors or the state of processor cache." System.Threading.Interlocked has some good methods too. EDIT: Be warned that .NET prior to 4.0 (yes, 3.5 and older = runtime 2.0 and older) is flawed and can reorder volatile reads even when it should not => do not trust volatile unless you are an expert (know for sure what you are doing, or are just experimenting; do not use volatile for production code, you have been warned; see the chat: https://chat.stackexchange.com/rooms/85938/lazyness).

                        3. I am not an expert ;)
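
To illustrate the first lesson, here is a minimal sketch (an editorial addition, not from the answer) of the question's SubEvents property built on System.Threading.LazyInitializer instead of a hand-rolled double-checked lock; it covers one-time initialization only, not the Invalidate scenario. MyService is a hypothetical containing class.

    using System.Threading;

    public class MyService
    {
        // state required by this EnsureInitialized overload
        private SubEvents _subEvents;
        private bool _subEventsInitialized;
        private object _subEventsLock;

        public SubEvents SubEvents =>
            LazyInitializer.EnsureInitialized(
                ref _subEvents, ref _subEventsInitialized, ref _subEventsLock, LoadSubEvents);
            // note: the method-group conversion still allocates a delegate per read;
            // cache it in a field if that matters (see phoog's answer above)

        private SubEvents LoadSubEvents()
        {
            // [....] runs at most once, may use instance members
            return new SubEvents();
        }
    }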






                      EDIT: Seeing your third attempt I will try to give you my version, but I repeat: I am not an expert ;)



public class MyLazy<T>
{
    private class Box
    {
        public T Value { get; }
        public Box(T value) => Value = value;
    }

    private Box box;
    private Func<T> factory;
    private readonly object guard = new object();

    public MyLazy(Func<T> factory) => this.factory = factory;

    public T Value
    {
        get
        {
            var box = Volatile.Read(ref this.box);
            return box != null ? box.Value : GetOrCreate();
        }
    }

    private T GetOrCreate()
    {
        lock (guard)
        {
            var box = Volatile.Read(ref this.box);
            if (box != null) return box.Value;
            box = new Box(factory());
            Volatile.Write(ref this.box, box);
            return box.Value;
        }
    }

    public void Invalidate() => Volatile.Write(ref this.box, null);

    public void Invalidate(Func<T> factory) // bonus
    {
        lock (guard)
        {
            this.factory = factory;
            Volatile.Write(ref this.box, null);
        }
    }
}
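
A hypothetical usage example (editorial addition) of the class above, using the property names from the question; because the factory is handed over in the constructor, it is free to capture this:

    private readonly MyLazy<SubEvents> _subEvents;

    public MyService() // hypothetical containing class
    {
        _subEvents = new MyLazy<SubEvents>(LoadSubEvents); // delegate created once
    }

    public SubEvents SubEvents => _subEvents.Value;

    // _subEvents.Invalidate() makes LoadSubEvents run again on the next read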




                      Important comments before I react to Theraot's answer




1. I did not use lock in Invalidate() because although you could potentially call it just before GetOrCreate calls its Volatile.Write in a different thread, there is no real conflict: it behaves in such a way that Invalidate() was either called before the volatile read or after the volatile write (it behaves like before the volatile read even if in-between). Read more about acquire/release to understand what I just wrote.

2. It was not clear how many times Invalidate can be called (and the value re-created). I wrote it in such a way that you can actually construct the value multiple times, even changing the function (the bonus Invalidate(Func...) was already there before this edit). I was thinking that this could actually be a very useful feature - to have a set and a lazy get, either by allowing the factory to create different things and/or by changing the factory as part of Invalidate. Maybe not intended by the OP, but I was certainly thinking about how I could actually use this.

3. I took phoog's answer for granted - it really is not a good idea to create the delegate on every access. There are even more possible problems you can read about in our (mine and phoog's) comments under Henrik's answer (the API binds the type to the call, allowing Get(() => 42) and later Get(() => "surprise")).


                      Reaction to Theraot's answer



OP's second and (EDIT: fourth - yes, it contains a flaw: not returning from ExecutionAndPublication) versions appear correct to me, because of acquire/release semantics (thanks go to Voo, as already mentioned earlier).



                      Two important comments to Theraot:




1. What is the demonstration trying to prove? You are calling Init twice without Invalidate in between, which means that you are writing to _data when _initialized is already true. I am also surprised that it does not fail in .NET Core; it should, because the failure is not because of volatile but because of double-init (overwriting _data when _initialized signals it is valid).

2. As for volatile Framework vs. Core, I am not sure here, but my feeling about it all is that the original description of volatile was that the compiler(s) (IL+JIT) won't reorder reads/writes, because it was designed for Windows and x86 (strong memory model), but that it was later redesigned for ARM (weak memory model), changing the specification to acquire/release, which was originally implicit (x86's strong memory model). The original definition did not guarantee anything about CPU and cache, but that was assumed and the specs got upgraded later. I am not a language lawyer ;)


                      Final note



I am (probably) not going to update my code for the single-init, single-invalidate scenario, and I am truly looking forward to somebody resolving the mess about volatile vs. .NET 1.1 vs. .NET 2.0+ vs. .NET Core, or where it got changed. I myself am here to learn something (and help where I can).



EDIT: We agreed that acquire/release semantics hold, but only since .NET 4.0 (or 4.5?) ... see the chat: https://chat.stackexchange.com/rooms/85938/lazyness
... until proved otherwise. Use the framework-provided API!



                      Personal note: I am glad I have learned something here, that was the second time (since reflection cache - field/property reordering). Thank you all for being (what you are).






answered Nov 16 at 17:11 by firda, edited 10 hours ago























                      • @DirkBoer: And your second version isn't safe either, for reasons you may find in this answer (two variables + volatile + Invalidate).
                        – firda
                        Nov 16 at 17:32










                      • I would change "I am not an expert" to "We are not experts".
                        – jpmc26
                        2 days ago










                      • @jpmc26: "We" is too broad, "You and me are not experts, at least not yet" ... which reminds me of: "The more you know the more you know how much you don't know". Anyway, I just wanted to say: this list is in no way complete, you won't become an expert just by reading it ;) (and I may even be wrong in something). Some other links: SO: Volatile vs. Interlocked vs. lock and System.Threading.Interlocked
                        – firda
                        2 days ago










                      • As we already discussed in the other comments, point 2 is very misleading: volatile guarantees a great deal more than just the order of writes. It gives visibility and ordering guarantees. Which also influences point 1 - volatile is essential to get the necessary ordering guarantees to make multi variable things work.. but you can make it work.
                        – Voo
                        2 days ago








                      • 1




                        Note about why I did not use lock in Invalidate(): you could potentially call it just before GetOrCreate calls its Volatile.Write in different thread, but there is no real conflict, it behaves in such a way, that Invalidate() was either called before volatile read or after volatile write (behaves like before volatile read even if in-between). But feel free to add the lock for extra safe, read more about acquire/release to understand what I just wrote.
                        – firda
                        yesterday















                      up vote
                      5
                      down vote













Why not wrap a Lazy<T> and then lazy-load the Lazy<T> in your Get?



public class MyLazy {
    private object lazy;
    private object _Lock = new object();

    public T Get<T>(Func<T> factory) {
        if (lazy == null) {
            lock (_Lock) {
                if (lazy == null) {
                    lazy = new Lazy<T>(factory);
                }
            }
        }
        return ((Lazy<T>)lazy).Value;
    }
}


This takes advantage of existing features that have been tried and tested, instead of rolling your own.
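
As a variation (an editorial sketch, not part of the answer), the lock could be replaced by Interlocked.CompareExchange: losing the race only means an extra Lazy<T> is created and discarded, and its factory never runs because only the published instance's Value is ever read. The original caveat still applies: every caller must ask for the same T.

    using System.Threading;

    public class MyLazy {
        private object lazy;

        public T Get<T>(Func<T> factory) {
            if (lazy == null) {
                // atomically publish a Lazy<T>; if another thread won, ours is discarded unused
                Interlocked.CompareExchange(ref lazy, new Lazy<T>(factory), null);
            }
            return ((Lazy<T>)lazy).Value;
        }
    }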






answered Nov 15 at 12:50 by Nkosi, edited Nov 15 at 13:43



















                      • 12




                        I heard you like Lazy so I put some Lazy in your Lazy... ;)
                        – Pieter Witvoet
                        Nov 15 at 13:02








                      • 1




                        If you took the ConcurrentDictionary then you wouldn't need the lock - it has the very convenient GetOrAdd method ;-]
                        – t3chb0t
                        Nov 15 at 13:34












                      • @t3chb0t good point
                        – Nkosi
                        Nov 15 at 13:35






                      • 8




                        A static cache without an eviction policy is called a memory leak.
                        – Johnbot
                        Nov 15 at 13:42










                      • @Johnbot true. I'll revert that suggestion out for the time being.
                        – Nkosi
                        Nov 15 at 13:43















up vote
3
down vote

With the feedback that (my brain) could understand, I've come to this for now.

• (I think) I literally copied the locking structure of the Lazy / thread-safe singleton pattern
• Added the volatile keyword for the _Loaded check
• Moved the generic parameter to the class type, adding a bit more boilerplate code in exchange for more type safety and no boxing
• Added a warning to remind myself there might be issues

As for the advice "leave it to the smarter people": that's something I can't work with. I like to learn, I like other people to learn, and I prefer a society where people are motivated to fail (against a calculated cost) in order to learn for themselves.

I appreciate that everyone has a different opinion about that; that's okay.

I'm still not 100% sure whether this solves at least the thread-safety problems of the first version, because the conversation went a bit off-topic, imo. If anyone knowledgeable can comment on that, I would appreciate it. For the rest, I'm going to use this code in production and see whether it causes (practical) problems for my caching of properties.

/// <summary>
/// Warning: might not be as performant (and safe?) as the Lazy<T>, see:
/// https://codereview.stackexchange.com/questions/207708/own-implementation-of-lazyt-object
/// </summary>
public class MyLazy<T>
{
    private T _Value;
    private volatile bool _Loaded;
    private object _Lock = new object();

    public T Get(Func<T> create)
    {
        if ( !_Loaded )
        {
            lock (_Lock)
            {
                if ( !_Loaded ) // double checked lock
                {
                    _Value = create();
                    _Loaded = true;
                }
            }
        }

        return _Value;
    }

    public void Invalidate()
    {
        lock ( _Lock )
            _Loaded = false;
    }
}
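A quick concurrency smoke test for this class (my own addition, not from the answer): hammer Get from many threads and check that the factory only ran once. It demonstrates usage, not a proof about the memory model.

using System;
using System.Threading;
using System.Threading.Tasks;

public static class MyLazySmokeTest
{
    public static void Main()
    {
        int created = 0;
        var lazy = new MyLazy<Guid>();

        Parallel.For(0, 1_000_000, _ =>
        {
            var value = lazy.Get(() =>
            {
                Interlocked.Increment(ref created);
                return Guid.NewGuid();
            });
        });

        Console.WriteLine($"factory invocations: {created}"); // expected: 1
    }
}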





answered Nov 16 at 11:10 (edited Nov 16 at 11:15) – Dirk Boer

• (+2) "As for the advice 'Leave it to the smarter people'. That's something I can't work with." I fully support this point! If it's so easy to do it the wrong way, then perhaps there should be some documentation on absolute DON'TS and on which parts of the language are dangerous if used the wrong way. Without this question and Eric's feedback, many people would probably never hear about the time travel that is not explicitly forbidden by the C# specification, so it's definitely worth spreading (and improving the docs).
  – t3chb0t, Nov 16 at 11:16

• (+2) I think Eric's advice wasn't to stop trying to experiment and learn about advanced concepts, but rather to rely on battle-hardened correct implementations from the framework where you can. If you'd like to learn more about some of this, I'd recommend this: albahari.com/threading/…
  – RobH, Nov 16 at 15:22

• (+2) @t3chb0t: with cryptography, the most important piece of advice is: don't roll your own. Not because anyone thinks you shouldn't be allowed to learn, but because it's a deceptively difficult subject, where it's easy to think you've done things right, but exceedingly difficult to actually get it right. I think the same can be said for concurrency (which is what Eric just did). It's easy to get this wrong not because it's poorly documented, but because it's a very difficult subject.
  – Pieter Witvoet, Nov 16 at 15:31

• (+2) @Adriano No, your argumentation doesn't work. The compiler may not access _Value before it has read _Loaded, since _Loaded is now volatile, which forbids that reordering (note that Eric talks about the original code, which does not use volatile). The problem here is the Invalidate, though, which at the very least causes a race condition that might lead to reading a partially initialized value. But if you remove the Invalidate, the current implementation should be fine as far as I can see.
  – Voo, 2 days ago

• (+2) @Voo: volatile: "Adding the volatile modifier ensures that all threads will observe volatile writes performed by any other thread in the order in which they were performed." It does not ensure anything about any other variable/field (and it can be cached, because cache rows are big enough to hold both _Value and fields from some object that happens to be at a lower address, which you can access and thereby get _Value loaded into the cache).
  – firda, 2 days ago

















up vote
1
down vote

I've done the same analysis for the .NET Core version of Lazy and stripped out the following, which doesn't suit my needs anyway in this case (the new Lazy(Func<T>) constructor):

• Exception caching
• all the other modes than ExecutionAndPublication
• comments
• Contract.Assert's
• any other things like Debugger visualization, ToString() etc.
• Inlined some functions that stopped having any logic because of stripping the other things
• I've reversed the order of methods to make it read more chronologically

Questions/notes that I have:

• would anything change if _state were changed from an object to a boolean in this stripped version? (in the original version it is used for alternative paths, like different publication modes or a cached exception) Never mind - the state object is also being used as a lock. I expect we can change it to a plain object though, saving the separate class.
• is it allowed to inline some other methods, like ViaFactory?
• it feels a bit strange to me that Value is called recursively the first time, but that might have to do with other code that I stripped - i.e. the exception caching.



internal class LazyHelper
{
    internal LazyHelper()
    {
    }
}

public class Lazy<T>
{
    private volatile LazyHelper _state;
    private Func<T> _factory;
    private T _value;

    public Lazy(Func<T> valueFactory)
    {
        _factory = valueFactory;
        _state = new LazyHelper();
    }

    public T Value => _state == null ? _value : CreateValue();

    private T CreateValue()
    {
        var state = _state;
        if (state != null)
        {
            ExecutionAndPublication(state);
        }
        return Value;
    }

    private void ExecutionAndPublication(LazyHelper executionAndPublication)
    {
        lock (executionAndPublication)
        {
            ViaFactory();
        }
    }

    private void ViaFactory()
    {
        _value = _factory();
        _state = null; // volatile write, must occur after setting _value
    }
}


Not sure if it is allowed, but if I inline even more, you really get a 'normal' double-checked lock with a volatile. The main difference is that the locker object is also being used as a _loaded flag.



public class Lazy<T>
{
    private volatile object _state;
    private Func<T> _factory;
    private T _value;

    public Lazy(Func<T> valueFactory)
    {
        _factory = valueFactory;
        _state = new object();
    }

    public T Value => _state == null ? _value : CreateValue();

    private T CreateValue()
    {
        var state = _state;
        if (state != null)
        {
            lock (state)
            {
                if (ReferenceEquals(_state, state)) // seems to act as the second check
                {
                    _value = _factory();
                    _state = null;
                }
            }
        }
        return Value;
    }
}
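A quick usage check of the stripped-down class (my own addition): the factory should only run on the first access of Value, and the same value comes back afterwards.

// Assumes the stripped Lazy<T> above is in scope (it shadows System.Lazy<T>).
int calls = 0;
var lazy = new Lazy<string>(() => { calls++; return "expensive result"; });

Console.WriteLine(calls);      // 0 - nothing created yet
Console.WriteLine(lazy.Value); // "expensive result" - factory runs here
Console.WriteLine(lazy.Value); // same cached value
Console.WriteLine(calls);      // 1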





answered yesterday (edited yesterday) – Dirk Boer

• 1. Remember there is no Invalidate; what I gave you has it. 2. The volatile keyword may work the same as Volatile.Read and Volatile.Write, but the documentation for the latter is better and it can also be converted to Visual Basic, so I rather used the methods instead of the keyword. 3. .NET Core appears to free the lock after initialization (again: no Invalidate) and uses acquire/release semantics, which are fences (no reordering past the point, but find that out for yourself for the full explanation).
  – firda, yesterday

• T Value => _state ... has acquire semantics (Volatile.Read), which means that it cannot see the null until _state = null is performed in CreateValue with release semantics (Volatile.Write), and that no read can be reordered before the volatile read (the value cannot be cached) and no write can be reordered after the volatile write (it cannot be partially constructed).
  – firda, yesterday

























                      up vote
                      0
                      down vote













I thought when I asked this question it was going to be about the passing of the delegate, but it is the double-checked lock that seems to be quite difficult to grasp.



To satisfy my curiosity I took the .NET source of Lazy<T> and stripped out the following things, which do not suit my needs in this case anyway:




• Exception caching

• all modes other than ExecutionAndPublication

• comments

• Contract.Assert calls

• other extras such as debugger visualization, ToString(), etc.

• functions that were left with no remaining logic after the stripping (these were inlined)


This is the current result of the stripped-down Lazy. Be aware that I did my best not to change anything that might alter behaviour, and hopefully I didn't. If anyone else has updates, those would be more than welcome.



                      Note that I did this for study and learning and not to put it in production.



Some questions I still have about simplifying it further:





• can the try/finally, Monitor.Enter/Exit and the lockTaken flag be simplified to a lock {} statement in this case? If I follow Eric's answer here the answer would be yes, but maybe I'm overlooking something: https://stackoverflow.com/a/2837224/647845 Probably not, because of the if statement. (See the sketch after the code below.)

• can the sentinel be replaced by a boolean and a null assignment for the factory?

• assuming the logic is sound, can the whole CreateValue() be replaced by an inline new Boxed(factory())?


                      -



#if !FEATURE_CORECLR
[HostProtection(Synchronization = true, ExternalThreading = true)]
#endif
public class Lazy2<T>
{
    class Boxed
    {
        internal Boxed(T value)
        {
            m_value = value;
        }
        internal T m_value;
    }

    static readonly Func<T> ALREADY_INVOKED_SENTINEL = delegate
    {
        return default(T);
    };

    private Boxed m_boxed;
    private Func<T> m_valueFactory;
    private object m_threadSafeObj;

    public Lazy2(Func<T> valueFactory)
    {
        m_threadSafeObj = new object();
        m_valueFactory = valueFactory;
    }

    public T Value
    {
        get
        {
            Boxed boxed = null;
            if (m_boxed != null)
            {
                boxed = m_boxed;
                if (boxed != null)
                {
                    return boxed.m_value;
                }
            }

            return LazyInitValue();
        }
    }

    private T LazyInitValue()
    {
        Boxed boxed = null;

        object threadSafeObj = Volatile.Read(ref m_threadSafeObj);
        bool lockTaken = false;

        try
        {
            if (threadSafeObj != (object)ALREADY_INVOKED_SENTINEL)
                Monitor.Enter(threadSafeObj, ref lockTaken);

            if (m_boxed == null)
            {
                boxed = CreateValue();
                m_boxed = boxed;
                Volatile.Write(ref m_threadSafeObj, ALREADY_INVOKED_SENTINEL);
            }
            boxed = m_boxed;
        }
        finally
        {
            if (lockTaken)
                Monitor.Exit(threadSafeObj);
        }

        return boxed.m_value;
    }

    private Boxed CreateValue()
    {
        Boxed boxed = null;

        Func<T> factory = m_valueFactory;
        m_valueFactory = ALREADY_INVOKED_SENTINEL;

        boxed = new Boxed(factory());

        return boxed;
    }
}
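
As a hedged sketch of the first question above (this is my own simplification attempt, not taken from the reference source): if the conditional Monitor.Enter is dropped, LazyInitValue and CreateValue could collapse into a plain lock statement, with the factory field itself marking "already invoked" (the second question) and CreateValue() inlined (the third question). The trade-off is that, without the sentinel swap on m_threadSafeObj, callers that miss the fast path after initialization still briefly take the lock, whereas the original skips Monitor.Enter entirely once m_threadSafeObj has been replaced by ALREADY_INVOKED_SENTINEL.

    // Hypothetical replacement for LazyInitValue/CreateValue, NOT the reference
    // source: m_threadSafeObj is assumed to stay a plain lock object for the
    // lifetime of the instance, and m_valueFactory == null marks "already invoked".
    private T LazyInitValueSimplified()
    {
        lock (m_threadSafeObj)
        {
            if (m_boxed == null)
            {
                Func<T> factory = m_valueFactory;
                m_valueFactory = null;             // replaces ALREADY_INVOKED_SENTINEL
                m_boxed = new Boxed(factory());    // CreateValue() inlined
            }
            return m_boxed.m_value;
        }
    }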





                      • Do you still plan to add Invalidate() => Volatile.Write(ref m_boxed, null)? I would rather use Boxed boxed = Volatile.Read(ref m_boxed), make m_threadSafeObj readonly (use lock and remove the sentinel). But remember I am not an expert, just interested ;)
                        – firda
                        2 days ago












                      • Hi @firda, nothing planned yet :) just was interested to break everything down to the core. For my own version it would be useful if I have an Invalidate(), but I first need to have better understanding of all the moving parts.
                        – Dirk Boer
                        2 days ago










                      • This is it. ONE volatile variable, not two.
                        – Adriano Repetti
                        2 days ago








• @Voo: There is no volatile in Reference Source either and there is no Invalidate in this version, so the logic appears to be: 1. m_boxed can never be assigned until the Boxed is fully constructed (it is a class, not a struct) and 2. even if the effect is not immediately visible to all threads, they will just take the lock to find out that it was already created.
  – firda
  yesterday






• It would be better to post a follow-up question instead of multiple versions as answers with questions in them.
  – Heslacher
  9 hours ago
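
Following up on firda's first comment in this thread: here is a minimal, self-contained sketch (mine, not the reviewed code) of how an Invalidate() based on Volatile.Write could behave. It assumes the factory is kept for the lifetime of the object, so the value can be rebuilt after invalidation, and that the factory never returns null.

    using System;
    using System.Threading;

    // Sketch only (not the reviewed code): firda's idea applied to a minimal
    // invalidatable lazy wrapper. The value field itself is the marker; null
    // means "not (re)built yet".
    public sealed class InvalidatableLazySketch<T> where T : class
    {
        private readonly Func<T> _factory;       // assumed never to return null
        private readonly object _lock = new object();
        private T _value;

        public InvalidatableLazySketch(Func<T> factory) => _factory = factory;

        public T Value
        {
            get
            {
                var value = Volatile.Read(ref _value);      // acquire
                if (value != null)
                    return value;

                lock (_lock)
                {
                    if (_value == null)
                        Volatile.Write(ref _value, _factory());  // release
                    return _value;
                }
            }
        }

        // The next reader falls through to the slow path and rebuilds the value.
        public void Invalidate() => Volatile.Write(ref _value, null);
    }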














