Cute, but is this actually needed? It's one more thing to remember, one more thing to know the subtleties of, and for what? To save writing a very readable and unambiguous line of code?
It feels like the C# designers have a hard time saying "no" to ideas coming their way. It's one of my biggest annoyances with this otherwise nice language. At this point, C# has over 120 keywords (incl. contextual ones) [0]. This is almost twice as many as Java (68) [1], and five times as many as Go (25) [2]. And for what? We're trading brevity for complexity.
[0]: https://learn.microsoft.com/en-us/dotnet/csharp/language-ref... keywords/
> Cute, but is this actually needed? It's one more thing to remember, one more thing to know the subtleties of, and for what?
Hi there! C# language designer here :-)
In this case, it's more that this feature made the language more uniform. We've had `?.` for more than 10 years now, and it worked properly for most expressions except assignment.
During that time we got a large amount of feedback from users asking for this, and we commonly ran into it ourselves. At a language and impl level, these were both very easy to add in, so this was a low-cost QoL feature that just made things nicer and more consistent.
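A minimal sketch of the asymmetry being described (config/Settings/RetryPolicy are hypothetical names):

    var policy = config?.Settings?.RetryPolicy;   // conditional read: allowed since C# 6
    config?.Settings?.RetryPolicy = policy;       // conditional assignment: a compile error before C# 14, now allowed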
> It feels like the C# designers have a hard time saying "no" to ideas coming their way.
We say no to more than 99% of requests.
> We're trading brevity for complexity
There's no new keyword here. And this makes usage and processing of `?.` more uniform and consistent. Imo, that is a good thing. You have less complexity that way.
I don't get this argument as it really doesn't match my practical experience. Using new C# features, the code I write is both easier to read and easier to write. On top of that it's less error prone.
C# is also much more flexible than the languages you compared it to. In a bunch of scenarios where you would need to add a second language to the stack, with C# you could still use just one language, which reduces complexity significantly.
I stumbled over this a few times, mostly when cleaning up older code. This basically just means that using the ?. member access no longer dictates what is possible on the right side.
Property reads were fine before (returning null if a part of the chain was null), method invocations were fine (either returning null or just being a no-op if a part of the chain was null). But assignments were not, despite syntactically every ?. being basically an if statement, preventing the right side from executing if the left side is null (yes, that includes side-effects from nested expressions, like arguments to invocations).
So this is not exactly a new feature, it just removes a gotcha from an old one and ensures we can use ?. in more places where it previously may have been useful, but could not be used legally due to syntax reasons.
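A small sketch of that side-effect rule, assuming C# 14 and a hypothetical Order class:

    int counter = 0;
    Order? obj = null;

    obj?.Register(++counter);          // already legal: when obj is null, ++counter never runs
    obj?.Id = ++counter;               // new: same rule, the whole statement including the RHS is skipped
    System.Console.WriteLine(counter); // prints 0

    class Order { public int Id; public void Register(int n) { } }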
> is this actually needed
Yes, actually. I did write it multiple times naturally only to realize it was not supported yet. The pattern is very intuitive.
Agreed; it appears that since they changed to yearly releases they have become pressured to add new language features every year.
As a polyglot I have the advantage that I don't have to sell myself as an XYZ Developer, and increasingly I don't think C# (the language itself) is going in the direction that I would like; for that complexity I'd rather keep using C++.
Just wait for extension everything, plus whatever design union types/ADTs end up having, and then what are they going to add on top of that to justify the team size and yearly releases?
Despite my opinion on Go's design, I think the .NET team should take a lesson out of them, and focus on improving the AOT story, runtime performance, and leave the language alone, other than when needed to support those points.
Also bring VB, F# and C++/CLI along; this doesn't have to be the C# Language Runtime, where C# gets all the features of what was designed as a polyglot VM.
Nothing is stopping you from constraining the language version you want to be used in your projects:
https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
You can force it all the way down to ISO-1/2.
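For example, in the project file (a sketch; any supported value works):

    <PropertyGroup>
      <LangVersion>10.0</LangVersion>
    </PropertyGroup>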
If this is still insufficient, then I question what your goals actually are. Other people using newer versions of C# on their projects shouldn't be a concern of yours.
Oddly antagonistic take on a reasonable comment. GP could be working together with other people, for example, in which case every such idiosyncratic configuration introduces a little social and mental friction. This is covered at length in similar conversations in Go threads, where people say things like “defaults matter”.
Obviously you’re not alone to disagree, and there are even some good arguments you could potentially be making. But to say “I question what your motives really are” and tell someone what they should be concerned with is… odd?
It’s a very common position with ample practical examples. While there certainly are valid counter arguments, they are a little more involved than “nothing is stopping you.” There is. Collaborating with others, for example.
Easily said. This applies to complaints about other languages as well, and it is only doable for people who work alone.
Yes, this doesn't actually add anything to the "size" of the language, if anything it actually shrinks it. It's existing syntax (the ? and ?? operators) and existing semantics. The only thing was that it worked in half the cases, only reads but not writes. Now this completes the functionality so it works everywhere.
You can argue that C# gets a lot of new features that are hard to keep up with, but I wouldn't agree this is one of them. This actually _reduces_ the "mental size" of C#.
> This actually _reduces_ the "mental size" of C#
IDK, if you read

    Settings?.RetryPolicy = new ExponentialBackoffRetryPolicy();

as "there is now an ExponentialBackoffRetryPolicy" then you could be caught out when there isn't. That one ? char can be ignored .. unless it can't. It's another case where "just because it compiles and runs doesn't mean that it does the thing".
This to me is another thing to keep track of. i.e. an increase in the size of the mental map needed to understand the code.
The description mentions side effects such as GetNextId(), but creating that object doesn't look like it has any side effects, so perhaps not the best example.
As I wrote in another comment, ignored side effects are perhaps the one questionable aspect of it. I usually assume the RHS is evaluated first, regardless of what happens on the left - but I'm not sure that mental model was actually correct. But keeping that would mean having to simply do an explicit if _when_ there are side effects. So

    if (myObj is not null) myObj.Id = GetNextAvailableId(); // Side effect

But for non-side effects

    Settings?.RetryPolicy = new ExponentialBackoffRetryPolicy();

It's obviously hard to know when there are side effects, but that problem already existed. You could trip this up before too, e.g. this:

    var id = GetNextAvailableId();
    if (myObj is not null) myObj.Id = id;

would have tripped you up. But at least it's obvious WHY it would. Something I have long sought in C# is a good way of tracking what is pure and what isn't.
Hi there! One of the lang designers here.
That's been part and parcel for C# for over 10 years at this point. When we added `?.` originally, it was its nature that it would not execute code that was now unnecessary due to the receiver being null. For example:
    Settings?.SetRetryPolicy(new ExponentialBackoffRetryPolicy());

This would already not run anything on the RHS of the `?.` if `Settings` was null.
So this feature behaves consistently with how the language has always treated this space. Except now it doesn't have an artificial limitation on which 'expression' level it stops at.
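A small sketch of the parallel, using the same hypothetical Settings object as above:

    Settings?.SetRetryPolicy(new ExponentialBackoffRetryPolicy());   // skipped when Settings is null - since C# 6
    Settings?.RetryPolicy = new ExponentialBackoffRetryPolicy();     // skipped when Settings is null - new in C# 14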
Quality-of-life upgrade - needed? Depends.
Thanks for the upvotes! While testing and writing about the feature, I suspected it would receive mixed reactions.
The `?.` operator behaves similarly on the LHS to the RHS, making the language more consistent, which is always a good thing. In terms of readability, I would say that once you understand how the operator works (which is intuitive because the language already supports it on the RHS), it becomes more readable than wrapping conditionals in `if` statements.
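For instance (a rough sketch, reusing the hypothetical customer/request objects from the article's example):

    // Wrapping the assignment in explicit null checks:
    if (customer is not null && customer.Profile is not null)
    {
        customer.Profile.Avatar = request.Avatar;
    }

    // C# 14 null-conditional assignment:
    customer?.Profile?.Avatar = request.Avatar;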
There are downsides, such as the overuse I mentioned. But this is true for many other language features: it requires experience to know when to use a feature appropriately, rather than applying it everywhere.
However, the great thing about this particular enhancement is that it's mostly cosmetic. Nothing prevents teams from not adopting it; the old syntax still works and can be enforced. C# and .NET are incredibly versatile, which means code can look dramatically different depending on its context and domain. For some projects, this feature might not be needed at all. But many codebases do end up with various conditional assignments, and in those cases, this can be useful.
I’m having a hard time imagining where this is useful. If I’m trying to assign to a property, but encounter an intermediate null value in the access chain, just skipping the assignment is almost never going to be what I want to do. I’m going to want to initialize that null value.
I'm also not sure I have a lot of code where this would be useful, but adding it to the language I don't feel makes it worse in any way; in fact, it makes it more consistent since you can do conditional null reads and conditional null method invocations (w/ `?.Invoke()`), so why not writes too.
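A quick sketch of that consistency argument (person, Rename, and onRenamed are hypothetical):

    var name = person?.Name;                      // conditional read - existing
    person?.Rename("Alice");                      // conditional method invocation - existing
    onRenamed?.Invoke(person, EventArgs.Empty);   // conditional delegate invocation - existing
    person?.Name = "Alice";                       // conditional write - new in C# 14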
“Why not?” is never a good-enough reason to add a new language feature.
If it’s rarely used, people may misinterpret whether the RHS is evaluated or not when the LHS doesn’t exist (I don’t actually know which it is).
Optional operations and missing properties often require subtle consideration of how to handle them. You don’t want to make it too easy to say “whatever”.
> people may misinterpret whether the RHS is evaluated or not when the LHS doesn’t exist
I fully expect no RHS evaluation in that case. I think the fear is misplaced; it's one of those "why can't I do that when I can do this" IMO. If you're concerned, enable the analyzer to forbid it.
There are already some really overly paranoid analyzers in the full normal set that makes me wonder how masochistic one can be...
improving crappy codebases without breaking anything. Bad .NET developers are forever doing null checks because they write weird and unreliable code. So if you have to fix up some pile of rotting code, it can help you slowly iterate towards something more sane over time.
For example in my last gig, the original devs didn't understand typing, so they were forever writing typing code at low levels to check types (with marker interfaces) to basically implement classes outside of the classes. Then of course there was lots of setting of mutable state outside of constructors, so basically null was always in play at any moment at any time.
I would have loved this feature while working for them, but alas; they were still on 4.8.1 and refused to allow me to upgrade the codebase to .net core, so it wouldn't have helped anyway.
These null checks are actually for Optionals in the type system. The whole standard library and many better packages use nullability and thus indicate what can and cannot ever be null. And structs can never be null.
So no, C# devs are not constantly null-checking more than in Rust.
Unfortunately, I suspect this will just makes it easier to keep writing sloppy code.
Monad-maxxing has ruined many a language
This is a functor, not a monad. Also, it's implemented really poorly. If only more languages actually implemented monads well. You wouldn't need special case junk like this.
Sorry, I would flag this in a code review. It's too easy to skip past this visually. Explicit if statements make it a lot more obvious what's going on. This is too much syntactic sugar.
While I can understand the reason behind the behaviour I cannot find it intuitive.
If I write an assignment, I expect the value to be evaluated.
I could have grasped the expression if all the null values had been replaced with new instances, but then it would have been too invasive and magical to work, so - again - I understand why the designers might have been forced to settle for the actual behaviour...
But maybe then the half-solution is not worth it
More concise? Yes.
More readable? I'm less convinced on that one.
Some of those edge cases and their effects can get pretty nuanced. I fear this will get overused exactly as the article warns, and I'm going to see bloody question marks all over codebases. I hope in time the mental overhead to interpret exactly what they're doing will become muscle memory...
When the first wave of null check operators came out, our code bases filled up with ? operators. I luckily had used the operator in Swift and Rust, so I somewhat knew what it can and cannot do. Worse is the fact that unlike Rust, the ? operator only works on null. So people started to use null as an optional value. And I think that is, at its core, the problem of the feature. C# is not advertising or using this themselves in this way. I think the nullable checks etc. are a great way to keep NPEs under control. But they can promote lazy programming as well. In code reviews, more often than not the question comes up when somebody is using ? either as an operator or as a nullable type like 'string?': are you sure the value should be nullable? And why are you hiding a bug with a conditional access when the value should never be null in the first place?
And more better? I'm not sure either.
In all these examples I feel something must be very wrong with the data model if you're conditionally assigning 3 levels down.
At least with the previous syntax the annoyance of writing it might prompt you to fix it, and it's clear when you're reading it that something ain't right. Now there's a cute syntax to cover it up and pretend everything is okay.
If you start seeing question marks all over the codebase most of us are going to stop transpiling them in our head and start subconsciously filtering them out and miss a lot of stupid mistakes too.
This is something I see in newbie or extremely lazy code. You have some nested object without a sane constructor and you have to conditionally construct a list three levels down.
This is a fantastic way to make such nasty behavior easier.
And agreed on the question mark fatigue. This happened to a project in my last job. Because nullable types were disabled, everything had question marks because you can't just wish away null values. So we all became blind and several nullref exceptions persisted for far too long.
I'm not convinced this is any better.
Swift has had this from the beginning, and it doesn’t seem to have been a problem.
What?.could?.possibly?.go?.wrong?.
if (This) { if (is) { if (much) { if (better) { println("I get paid by the brace") } } } }
False dichotomy. The problem is that the syntax implements a solution that is likely wrong in many situations and pairs with a bad program design. Maybe when we have this:

    what?.could?.possibly.go?.wrong = important_value()

Maybe we want code like this:

    if (!what) what = new typeof(what); // default-construct representative instance
    if (!what.could) what.could = new typeof(what.could);
    if (!what.could.possibly.go) what.could.possibly.go = new typeof(what.could.possibly.go)
    // now assignment can take place and actually retain the stored value
    // since we may have allocated what, we have to be sure
    // we propagate it out of here.
    what.could.possibly.go.wrong = important_value();

and not code which throws away the value (and possibly its calculation).
Why would you ever write an assignment, but not expect that it "sticks"? Assignments are pretty important.
What if someone doesn't notice the question marks and proceeds to read the rest of the code thinking that the assignment always takes effect? Is that still readable?
> Maybe we want code like this
It should be clear enough that this operator isn't going to run 'new' on your behalf. For layers you want to leave missing, use "?.". For layers you want to construct, use "??=".
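A sketch of that distinction (hypothetical Config/Connection types):

    // Leave missing layers missing - the write is simply skipped:
    config?.Connection?.RetryPolicy = important_value();

    // Construct missing layers with ??=, so the write always lands:
    config ??= new Config();
    config.Connection ??= new Connection();
    config.Connection.RetryPolicy = important_value();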
> Why would you ever write an assignment, but not expect that it "sticks"? Assignments are pretty important.
If you start with the assignment, then it's important and you want it to go somewhere.
If you start with the variable, then if that variable doesn't have a home you don't need to assign it anything.
So whether you want to skip it depends on the situation.
> What if someone doesn't notice the question marks and proceeds to read the rest of the code thinking that the assignment always takes effect? Is that still readable?
Do you have the same objection with the existing null-conditional operators? Looking at the operators is important and I don't think this makes the "I didn't notice that operator" problem worse in a significant way.
Just because you can't do assignments like that, it doesn't mean you shouldn't use null coalescing for reads. What exactly could go wrong?
Paranoid null checking of every property dereference everywhere (much?.like?.in?.my?.joke) whether each is ever possibly null or not, usually combined with not thinking through what the code behavior should be for each null case.
(Gets a lot better if you enable nullable references and upgrade the nullable reference warnings to errors.)
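One way to do that in the project file (a sketch using the standard SDK properties):

    <PropertyGroup>
      <Nullable>enable</Nullable>
      <!-- promote nullable reference warnings to build errors -->
      <WarningsAsErrors>$(WarningsAsErrors);nullable</WarningsAsErrors>
    </PropertyGroup>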
I wonder, does the important_value function get called and the value discarded or never called at all? Looks like a footgun if it has side-effects.
NullReferenceException, in line 7.
you didn't null check possibly.go.
if (!same) { return; } if (!number) { return; } if (!of_braces) { return; } println("but easier to read")
Yes, you should definitely unnest functions and exit early. But the null-coalesced version is shorter still.
if (Actually && Actually.you && Actually.you.would && Actually.you.would.write && Actually.you.would.write.it && Actually.you.would.write.it.like) { return this; }
Nothing to worry about:

    What?.could?.possibly?.go?.wrong?

Not so convinced:

    What?.could?.possibly?.go?.wrong = important_value()

Maybe the design is wrong if the code is asked to store values into an incomplete skeleton, and it's just okay to discard them in that case.
Oh come on just learn it properly it's not a big deal to read it
I'm a Java fan so I'm contractually required to dis c#, but actually I kinda like this. It reduces boilerplate. Yes, it could be abused but this is what code review is for.
Why the requirement, because of J++ and how Ext-VOS alongside Cool became .NET?
Most companies don't care about this kind of stuff.
I work across Java, C#, JS/TS, C++, SQL, and whatever else might be needed, even stuff like Go and C, that I routinely criticise, because there is my opinion, and then there is the job market, and I rather pay my bills.
You’re not wrong. Every language feature that gets added there’s someone who wants to stop the clock and hold the language definition in place because “people might misuse it” or “people might not be familiar with it”. It’s not language specific, it’s everywhere.
Still, enabling ?. access on the left side of the equals (assigning) feels like a serious anti-pattern to me.
I struggle to even see how anyone would prefer that over an explicit if before assigning.
Having that on the right side (attribute reference) is great, but that was already available as far as I understood the post...
Without it there's some silly inconsistency. For example I could call `person?.SetName(name)`, but if you wanted to refactor that into `person?.Name = name` you can't.
That's a great point I didn't think about. From that perspective, it does make sense.
Maybe my feeling is just rooted in the fact I've never used a language which allowed ?. on assignment
My take is that it's pretty minor. Modern C# has across-the-board null checking and for the most part you're not designing things where this even comes up. You are, however, correct, in that I have 100% seen the ?.SetName thing used by devs who just wanted to make the null checker go away and didn't actually think about what the correct behaviour was.
As someone who comes from a language with no ? (or equivalent) who only dabbles in C#, it actually seemed a little weird to me that this was one of the contexts where it wasn't usable.
So as a casual observer, I'd say it brings more consistency.
But also as a casual observer, my opinion is low-value.
The point the article is trying to make is that it reduces boilerplate; wouldn't be surprised if this gets added to TS in the next year or two.
It's starting to feel like C# is going down the path of C++. Tons of features that introduce subtleties and everybody has their own set of pet features they know how to use.
But the code gets really hard to understand when you encounter code that uses a subset you aren't familiar with. I remember staring at C++ codebases for days trying to figure out what is going on there. There was nothing wrong with the code. I just wasn't too familiar with the particular features they were using.
There's a couple reasons I disagree with you on this (at the moment; as given enough time I am sure C# will also jump the shark):
* The above is just applying an existing (useful) feature to a different context. So there isn't really much learning needed, it now just 'works as expected' for assignments and I'd expect most C# engineers to start using this from the get go.
* As a C# and C++ developer, I am always excited to hear about new things coming in C++ that purportedly fix some old pain points. But in the last decade I'd say the vast majority of those have been implemented in awful ways that actually make the problem worse (e.g. modules, filesystem, ...). C#'s new features always seem pretty sound to me on the other hand.
The difference is the language syntax choices are good. There's no "what does this const refer to" type confusion.
Agreed about the syntax choices. Much better than C++. The language is just getting a little too big for my taste.
At a social level, I 100% agree with you because I’ve started to see those behaviours in the community. But considered technically, C++ is on a whole different level from C#. The community seems to embrace “What does this print?” style puzzles, and figuring out when perfect forwarding or SFINAE kick in is genuinely tricky.
Static abstract methods are probably the feature I see used least (so far!) and they’re not nearly as hard to understand as half of the stuff in a recent C++ standard.
A C# dev can't complain, because complexity creates jobs.
I don't really get this obsessive insistence of purging the language of null checks. (And "if(x is not null)" is not an improvement of any kind)
It feels like Microsoft just wants C# to be Python or whatever and the language is losing its value and identity. It's becoming bland, messy, and complicated for all the same reasons they keep increasing padding between UI elements. It's very "me too" and I'm becoming less and less interested in what I used to consider my native language.
I used to write any and all little throwaway programs and utilities in C#, but it just keeps getting more and more fussy. Like Python, or maybe java. Nowadays I reach for C++. It's more complicated, but at least it's stable and not trying to rip itself apart.
> wants C# to be Python or whatever
Oh, how happy would I be if Python had a sliver of C# features. There's nothing like null-conditionals in Python, and there are many times I miss them
That as much is true, that is why stuff like minimal APIs and now aspire came to be, they want to cater to the JavaScript and Python folks, they have said as much in a couple of .NET podcast interviews.
Additionally, I think they are becoming hostage to the fact that every year C# gets a new release, and thus the team has to keep pushing features no matter what.
Imagine how C# will look a decade from now at this rhythm.
Slowly I am starting to think it is not that bad that most of the .NET projects that our agency does are still stuck on Framework.
Love to see conciseness for the sake of readability. Honestly I thought this was already a thing until I tried it a year ago…
I’m glad it’s now a thing. It’s an easy win, helps readability and helps to reduce the verbosity of some functions. Love it. Now, make the runtime faster…
I’d rather be explicit. If the value is null then it should be explicitly handled.
I feel like this is another step in the race to add every conceivable feature to a language, for the sake of it.
This is explicit though. The question mark operator is the developer explicitly asking for this behavior.
I wonder if this supports a cleaner way to throw when the target property's parent object is null? With null-coalescing assignment, you can do the following which will throw when 'x' is null:
    string x = null;
    string y = x ?? throw new ArgumentException("x is null");

It would be interesting to try something like:

    customer?.Name = newName ?? throw new InvalidOperationException("customer is null");

But I don't know how the language would be able to determine which potential null it was throwing for: 'customer' could be null, but so could 'newName'. I guess... maybe you could do:

    (customer ?? throw new InvalidOperationException("customer is null")).Name = newName ?? throw new ArgumentException("newName is null");

But the language already supports that, and it's extremely ugly...
At least so far, my instinct is that we should turn this off/ ensure it is never turned on, as it seems likely to be a foot gun.
I couldn't imagine what a "Null-Conditional Assignment" would do, and now I see but I don't want this.
Less seriously, I think there's plenty of April Fools opportunity in this space. "Null-Conditional Function Parameters" for example. Suppose we call foo(bar?, baz?) we can now decide that because bar was null, this is actually executing foo(baz) even though that's a completely unrelated overload. Hilarity ensues!
Or what about "Null-Conditional Syntax". If I write ???? inside a namespace block, C# just assumes that when we need stuff which doesn't exist from this namespace it's probably just null anyway, don't stress. Instead of actual work I can just open up a file, paste in ???? and by the time anybody realises none of my new "classes" actually exist or work I've collected my salary anyway.
This sounds like a shortcut, unless it isn't.
I have a feeling this is going to make debugging code written just a few months ago incrementally difficult. At least the explicit if statements are easier to follow the intent from months ago.
The syntax is clean though. I'll give it that.
I'm working on a Unity game and I'm so annoyed I can't use all of the new fancy c# features
Like Ruby's safe navigation operator `&.` and Kotlin, Groovy and Swift's `?.`
Looks interesting & I'm excited to try this out myself. I like the more verbose null/error handling personally in professional code, but maybe that's because im still working in framework! I'll certainly be using these in my personal projects that'll be on .NET 10
You can use newer LangVersion in framework too.
I'm looking forward to being able to use this. It doesn't sound like much but those extra three lines and variable assignment is duplicated a ton of times across our codebase so it'll be a nice change
While this is nice, there are some long requested features like Discriminated Unions that got delayed a lot.
Hi there. C# lang designer here :)
Discriminated unions continue to be worked on, and you can see our latest designs here: https://github.com/dotnet/csharplang/blob/main/proposals/uni...
The space there is large and complex, and we have a large amount of resources devoted to it. There was no way that `a?.b = c` was going to change if/when unions come to the language.
For unions, nothing has actually been delayed. We continue working hard on it, and we'll release it when we think it's suitable and ready for the future of the lang.
The only feature that actually did get delayed was 'dictionary expressions' (one that I'm working on). But that will hopefully be in C# 15 to fill out the collection-expression space.
Discriminated unions are a hard feature to put into a mature language that already has other features in a similar space - I mean enums and class hierarchies (e.g. if your union is "Cat or Dog", then in OO terms they have a common base class "Animal"). How does it play with records, structs, generics, etc.?
That is why although they are much requested, none of the proposals that I have seen are simple to understand or easy to implement, and thus are proceeding slowly.
I don't really see Discriminated union as being in "competition" with "a?.b = c" as that's a "quick win" extension to previous ?. and ?? syntax. It's not even close to being of the same magnitude.
I would settle for a good built-in Result<T, E> type, so that people don't roll their own crappy ones, or use various opinionated but incompatible libraries.
> if (customer?.Profile is not null)
> {
>     // Null-coalescing (??)
>     customer.Profile.Avatar = request.Avatar ?? "./default-avatar.jpg";
> }
Isn't this over engineered? Why not allow the assignment but do nothing if any of the intermediate objects is null (that's how Kotlin does it).
That's what's new in C# 14, it allows you to do

    customer?.Profile?.Avatar = "thing"

and will do nothing if the left hand side is null (not throw a null reference exception anymore).
Nice feature! (we've had it in Ruby for many years)
So like ruby's '&.' null safe chaining
Yes but they already had it for non-assigning uses.
Isn't this more confusing? Because it skips the code if the value is null, and I don't think it is normal to follow the flow assuming nothing has happened.
It kind of is more confusing because I always imagined the RHS to be evaluated first in an assignment, before the target is evaluated.
The motivation is that you don't want the side effects in some cases like GetNextId(), but I think it's still strange. I haven't thought deeply about it but I _think_ I'd rather keep the intuitive right-hand-first evaluation and explicitly have to use if (..) in case I have an RHS whose side effects I need to avoid when discarded.
That's already the case for the null-conditional operator when it ends in a method call: the method call is skipped if the base is null. For instance, we can invoke event handlers with "myEvent?.Invoke(...);" and the call will be skipped if there are no event handlers registered, and this is the canonical way to do it.
From the article:
> If config?.Settings is null, the assignment is skipped.
If the right hand expression has side effects, are they run? I guess they do, and that would make the code more predictable.
From the article as well:
> Side-Effect Prevention
> When a null-conditional statement assignment is evaluated, the right-hand side of the expression is not executed unless the left-hand side is defined.
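A tiny sketch of what that quoted rule means in practice (hypothetical Config/Settings types):

    int calls = 0;
    Config? config = null;

    config?.Settings.NextId = GetNextId();  // GetNextId() is never called, so calls stays 0
    System.Console.WriteLine(calls);        // prints 0

    int GetNextId() { calls++; return 42; }

    class Config { public Settings Settings { get; } = new(); }
    class Settings { public int NextId { get; set; } }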
Thanks. I missed it.
I really dislike that, because it hides the control flow too much. Perhaps I'm biased by Racket, where it's easy to define something weird using macros, but you should not do unexpected weird things.
For example you can define vector-set/drop that writes a value to a position of a vector, but ignores the operation when the position is outside the vector. For example

    (vector-set/drop v -2 (print "banana"))

With a macro it is possible to skip (print "banana") because -2 is clearly out of range, but if you do that everyone will hate you.
I think in the context of C# this is not exactly unexpected since there are various operators that apply conditional execution (and evaluation) to parts of an expression. All of them start with ?, though (? :, ??, ?., ?[], ??=), so except for casts to nullable types, every question mark in an expression signals that control flow is about to happen and it's been that way in C# for quite a while.
It’s for the use case where they’d skip it anyway so that would be intended behavior.
> I don't think it is normal to follow the flow assuming nothing has happened.
I think it is for situations where the programmer wants to check a child property but the parent object may be null. If the parent is expected to be null sometimes, the syntax lets the programmer express "try to get this value, but if we can't then move on" without the boilerplate of explicitly checking null (which may be a better pattern in some cases).
It's sort of like saying:
- Get the value if you can, else move on. We know it might not be there and it's not a big deal.
v.s.
- The value may not be there, explicitly check and handle it because it should not be null.
Your summary is almost correct but replace where you used "get" with "set".
never using nulls is liberating. this is syntactic sugar for dealing with nulls. definitely welcome though, and definitely will be abused (which cant be done when nulls are actually banished)
When you don't, you'd likely have a Maybe<string> for a username that would be a "string?" in C#. It's the same thing (more or less). For your optional type you still need some form of pattern matching to figure out whether you have it or not.
You can't banish the "absence of value" from any programming language. That wouldn't be a useful language. You can stop confusing "a string but perhaps not" as a single type "string" as C# did in the past though.
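For instance, with nullable reference types enabled (a sketch; TryGetUsername is hypothetical):

    string? username = TryGetUsername();   // "string or absent", stated in the type

    if (username is string name)           // pattern match on presence, much like a Maybe
    {
        System.Console.WriteLine($"Hello, {name}");
    }

    static string? TryGetUsername() => null;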
Welcome to PHP land :P
Null conditional assignment is bunk.
When you have an expression P which names a mutable place, and you execute P := X, the contract says that P now exhibits the value X, until it is assigned another value.
Conditional assignment fucks this up. When P doesn't exist, X is not stored. (Worse, it may even be that the expression X is not evaluated, depending on how deep the fuckery goes.)
Then when you access the same expression P, the conditional assignment becomes conditional access and you get back some default value like a nil.
Store X, get back nil.
That's like a hardware register, not a durable memory model.
It's okay for a config.connection?.retryPolicy to come up nil when there is no config.connection. It can be that the design makes nil a valid retry policy, representing some default. Or it could be that it is not the case, but the code which uses connection? handles the nil soon afterward.
But a store to config.connection?.retryPolicy not having an effect; that is dodgy.
What you need config.connection? to do, when the expression is being used to calculate a mutable place to assign to, is to check whether config.connection is null and, in that case, instantiate a representative instance of something which is then assigned to config.connection, such that the config.connection.retryPolicy place then exists and the assignment can proceed.
This is recognizable as a variation on COW (copy-on-write): having some default instance for reading, but allocating something on writing.
In a virtual memory system, freshly allocated memory can appear to contain zero bytes on access due to all of its pages being mapped to a single all-zero frame that exists in the entire system. Conceptually, the hardware could do away with even that all-zero frame and just have a page table entry which says "this is a zero-filled page", so the processor then fakes out the zero values without accessing anything. When the nonexistent page is written, then it gets the backing storage.
In order to instantiate settings.connection? we need to know what that has to be. If we have a static type system, it can come from that: the connection member is of some declared type of which a representative instance can be produced with all constructor parameters defaulted. Under a dynamic paradigm, the settings object can have a handler for this: a request to materialize a field of the object that is required for an assignment.
If you don't want a representative config.connection to be created when config.connection?.retryPolicy is assigned, preferring instead that config.connection stays null, and the assignment is sent to the bit buckets, you have incredibly bad taste and a poor understanding of software engineering and programming language design --- and the design of your program is scatter-brained accordingly.
> When you have an expression P which names a mutable place, and you execute P := X
This isn't the case, though, is it? A normal member access (or indexer) expression may point to a mutable location (field, property). However, with conditional access expressions you get either a member access _or nothing_. And that nothing is not a mutable place.
When you use any of the conditional operators, you split the following code into two paths, and dropping the assignment (since there's nothing to assign to) seems pretty consistent to me, since you'd also drop an invocation, or a property evaluation in similar expressions.
If you require P to point to something that actually exists because you want the assignment to succeed, then write code to ensure that P exists because the assignment has no way of knowing what the intention was on the left side.
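In other words (a sketch, hypothetical config/Connection names):

    // The write must succeed: make the place exist first.
    config.Connection ??= new Connection();
    config.Connection.RetryPolicy = newPolicy;

    // The place may legitimately be absent: say so, and accept that the write is skipped.
    config?.Connection?.RetryPolicy = newPolicy;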
This is syntactic sugar only. No change in semantics. So I don't buy into any argument about what this change 'means'.
And we all get to choose what we find ridiculous:
i = i + 1 ? No it does not. Never has, never will.
Connection is null? It's insane to type it as Connection then. null has type Null.
A representative config.connection being made out of nothing sounds pretty bad to me. If you want to make sure the value doesn't disappear, you shouldn't be using conditional assignment in the first place.
The config example isn't the best, but instead imagine if it was just connection?.retryPolicy. After you set connection?.retryPolicy it would be weird for reading it back to be null. But it would be just as weird for connection?.retryPolicy to not be null when we never established a connection in the first place.
The copy on write analogy is tempting but what you're describing only works when the default value is entirely made of nulls. If you need anything that isn't null, you need to actually make an object (either upfront or on first access). And if you do that, you don't need ?. anymore.
Someone is going to run into a null exception in an assignment and just throw in the question mark to shut it up, not thinking about the value disappearing.
That's the mindset the feature is developed for (and by).
Apparently you don't use if in your code?
Is .NET entering its twilight years as a tech people build new things with?
I just can't imagine Gen Z wanting to start a project in C#.
I realise there are still .NET shops, and I still talk to people who do it daily, but ours is a field driven by fashion whether we care to admit or not - and C# just does not feel as fashionable as it once did
(I'm a former C# dev, up until 2020)
I personally can't think of an all-rounder language that is better than C#. It's fast, has great tooling, powerful, extremely productive for working with large code bases and runs 'anywhere'.
JS has lost against TS which is basically C# for web (both designed by the same person) and Python is not really something you should build large applications with (execution speed + maintenance issues).
What do you believe is the current language du jour?
Golang is often the default now. As someone who’s new to backend development, I’ve been exploring C# and can’t understand why it’s not the default. I think C# primarily has a marketing problem.
> I personally can't think of an all-rounder language that is better than C#
I didn't ask about whether it was good.
I asked about whether it's past its peak.
It's certainly reaching "maturity". And I'd say it's improving still but the improvements are small (which is expected at that point).
I wouldn't say it's past its peak because it's still improving and there is no good alternative for a language of its class. Go isn't it (I doubt there will be a good desktop/mobile app/game engine etc story for Go in the future). Swift could have been a competitor in the allround space but Apple doesn't seem interested in conquering the world outside its own garden. I'm not sure who it would be that would make the "next" C# and .NET. Only Microsoft and Apple are making commercial desktop environments, for example.
I've been using .Net for almost 20 years, professionally for half that time, and I feel like excitement and momentum in the community has only been increasing.
Same story here. I decided lately to focus more on .net and, let's say, abandon Java.
It's portable, fast, productive and well supported by a massive corp. It's not just a "language du jour", it's here to stay.
There are plenty of job in dotnet where I live: old, new, startups...
I am the momentum!
I'd say it's never been better tbh. I can't speak for Gen Z but .NET (for some reason) was never the choice of startups. Possibly because there is still a cost associated with the best developer experiences, such as the best IDEs in editions that allow any size of for-profit org.
Well, JavaScript is older than C# and still poppin. Gen Z eats it up. C# too, for unity gamez.
It is definitely out of fashion, most directly in comparison to Go I suppose. It seems like they tried with .NET Core, but were not able to provide an appealing and coherent enough on-ramp. The ongoing death of native windows applications not helping either certainly.