Harassment: ASML, Dependency Injection
Mon, Feb 24, 2025 ❝Attacks over opinions on CDI (or was it about influence?)❞
This story is at the root of many attacks: claims that I'm incompetent, that I start up discussions to frustrate and delay the (in my words) "generally accepted enterprisey way of doing things", that I'm not capable of understanding CDI, and so on. Furthermore, without my understanding how, several people outside of ASML suddenly commented on this and attacked me over it, so now I want to set the record straight. One of the comments was, again out-of-the-blue and weirdly specific, about a static dependency injection framework, Dagger 2. So, to answer in hindsight: sure, Dagger 2 would've worked, except that in the places where DI was needed, we had a framework available that we knew would work. (Well, could work, given a proper selection of dependencies.)
Context
So, as mentioned before, the whole code-base was originally set up for a server product. Consequently, CDI became rather ubiquitous throughout it. This isn't necessarily a problem, especially not when a (single) server product is the final product. However, with shared use among different kinds of products with different kinds of needs, it starts to pose a bottleneck.
Now, it’s also interesting to consider that, chronologically, these developments start taking place after having assisted in development issues like solving “puzzles” with 4-6 parametric types, support with (somewhat) more comprehensive git operations like interactive rebase
and bisect
and merge conflict resolution, having built and prepared multiple releases, and having fixed and improved on CI/CD, and fixed, improved and included Proguard in the team builds to ensure CDI doesn’t cause (nearly as many) issues due to dependency (version) mismatches, as well as the regular development work that is ongoing. There was also this several-weeks effort of a prototype implementation that tackled a very specific use case to experiment with at a customer. So I had gained some attention.
Limitations (or complications) of CDI
This caught my attention when the 5-day down-time hit. (See the section "Trouble running release build".) CDI, something that otherwise seemed innocent enough when offered out-of-the-box, became the root of a significant problem. My experience with CDI was thus far limited to my open-source contributions to Jitsi's desktop application, a chat client that used Apache Felix. When those CDI complications hit, I was equally stumped. To be clear: not stumped in understanding why it was a problem; that part is obvious. But the logging wasn't too clear on which dependencies exactly were conflicting or which problem was cascading.
Note: I don’t actually know which CDI framework was used. It doesn’t really matter. The issue was due to too limited information and and a rather extensive listing of dependencies. Now, to the technical people who claim that it’s a lack of competence if this cannot be solved: I am sure that with a sufficiently detailed log-level, a few good runs, a listing of Maven’s dependency tree, etc. it should be possible to gain sufficient insight. Though, it wasn’t trivial, as there would certainly have been some other developers who would’ve quickly solved the problem. Regardless, it effectively still took multiple days.
However, it wouldn’t stay with that. When the various “small desktop tools” started getting significant traction, the need was there to introduce an application framework that integrated all of them. It was done in such a way that each tool would run inside the framework and the various development teams had their own tools to maintain and could extend the application with new tools. However, this is exactly the kind of set-up where CDI becomes more complicated. Various tools had different focus, so may on occasion end up with need for different implementations for a dependency. Furthermore, because CDI was applied throughout the whole code-base, everyone was fully dependent on it working and being available everywhere. The need for dependency injection resolution gets just a bit more complicated than a linear preferential ordering.
There was awareness of this, but it isn't the kind of thing one changes easily, so it stayed with me as an afterthought.
Improvement working-groups
Now, as I’m at ASML for half a year or so, the department starts to grow in number of teams/developers, and there is some need for having a few working-groups of different kinds to coordinate and stimulate general improvements. A number of these improvement groups are created, among them one for the desktop-applications and one for the common “Litho InSight” shared components.
Now, given my previous efforts in fixing and improving the build configuration, fixing generics puzzles with 4-6 parametric types, fixing and improving the Proguard configuration, etc., it would've been fairly logical for me to contribute in the working-group that focused on the desktop applications. However, there was quite significant push-back against that, and I ended up representing our team for the server/shared components.
Given the number of attacks and the subtle malicious bullshit that has been going on, I suspect this was on purpose. If you do not subscribe to that theory, I can relate. Regardless, I ended up representing our team for the server components, which were often also shared (utility) components needed in desktop applications when logic is shared with the server. Effectively, I ended up in a working-group with comparatively little overlap with our team's core focus. I have strong suspicions that this is related to the conflict with the team "architect", and I have indicators that other actions (by the PO) were for the same reason. (See other story …)
Consequences of CDI and different strategy
Now, I am quite certain that the plan was to put me in a spot where I could contribute little. And I probably screwed up that plan when I started questioning the use of CDI, the CDI problems that had occurred on various occasions, the use of "services" (utilities that required instantiation but usually had no state), and other such complications. I updated my knowledge, because the way things were going, the CDI puzzle would only become more complicated to maintain and manage. At some point we hit a stumbling block where an annotated priority/preference didn't suffice anymore.
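To illustrate what such a linear preference looks like, and where it stops helping: I don't know which framework or annotations the code-base actually used, so the following is a minimal sketch in standard CDI terms, with hypothetical names. A single prioritized alternative can override a default bean, but once different tools in the same application each need a different winner, one global ordering no longer suffices.

```java
import javax.annotation.Priority;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Alternative;
import javax.inject.Inject;

// Hypothetical example: two implementations compete for one injection point.
public interface Exporter {
    void export(String data);
}

@ApplicationScoped
class ServerExporter implements Exporter {
    public void export(String data) { /* write to server storage */ }
}

// An @Alternative with a @Priority wins over the default bean. This is a
// single, linear preference: as soon as two tools in the same application
// each need a *different* winner, this mechanism alone no longer suffices.
@Alternative
@Priority(100)
@ApplicationScoped
class DesktopExporter implements Exporter {
    public void export(String data) { /* write to a local file */ }
}

class Report {
    @Inject Exporter exporter; // resolved by the container at runtime
}
```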
My own focus, in part due to the earlier generics puzzles, regularly checking up on compiler warnings, etc., was on how we could make the code better by writing it such that we can be sure certain cases cannot happen. For example, eradicating null in many places because fields are final and the created/injected value always exists. Possibilities to strictly maintain class invariants were crippled, because field-injected instances couldn't be checked from construction onwards. And here we run into another stumbling block in how CDI was employed: virtually everything, if not literally everything, was injected directly into fields, all over a large part of the code-base.
So we have: "services" that must be constructed and injected even when stateless, classes that are blocked from certain syntactic programming-language capabilities because of (dynamic) field injection, etc. On top of that, people do not really understand what they're doing. Why? Because, when I first proposed the idea to consider different injection strategies and/or the "dependency injection pattern" (the design pattern), several voices pointed out that the constructors would grow significantly. Now, this is true, of course. However, the only difference is that it makes apparent (and transparent) how your code-base is effectively already tangled together. Furthermore, we would rely less on dynamic behavior and therefore have more static enforcement. Some had also not realized that many "services" could simply be static utility functions, which would in turn reduce the complexity of the classes.
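A minimal sketch of that difference, with hypothetical names (the real classes were of course ASML's): field injection forces a mutable field that stays null until the container gets around to it, while constructor injection allows a final field whose invariant is checked the moment the object exists.

```java
import java.util.Objects;
import javax.inject.Inject;

// Field injection: the field cannot be final, the container assigns it after
// construction, and the class cannot validate its invariants up front.
class WaferReportFieldInjected {
    @Inject
    MeasurementService measurements; // null until the container injects it

    String summary() {
        return measurements.latest().toString(); // hope it was injected...
    }
}

// Constructor injection: the dependency is final, checked once, and the class
// is valid from the moment it exists -- with or without a CDI container.
class WaferReport {
    private final MeasurementService measurements;

    @Inject
    WaferReport(MeasurementService measurements) {
        this.measurements = Objects.requireNonNull(measurements);
    }

    String summary() {
        return measurements.latest().toString();
    }
}

interface MeasurementService {
    Object latest();
}
```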
Essentially, I’m advocating for:
- using all the features of the programming language that are available,
- relying on static type-safety to catch errors early,
- not making things more complicated and opaque than necessary,
- not hiding complexities behind dynamic injection-“magic”,
- not taking unnecessary risks with CDI frameworks (such as dependency-resolution conflicts or injection-preference complications) when we don't have to,
- taking back control of the code-base by deliberately choosing which kind of injection we employ (which should almost never be field injection),
- regaining the ability to use final fields, with class invariants enforced as early as construction,
- reducing dependence on CDI, which would allow desktop applications to not use it at all (this was explicitly facilitated at a later time),
- more static detection of errors, due to the removal of dynamic behavior, which would make transitioning relatively painless,
- using constructor injection, which also means that without CDI available, the constructor is still properly defined for manual injection (i.e. the dependency injection pattern).
Note: I will use the word "magic" on a few occasions, not because I don't understand the mechanism. It isn't that hard to figure out that one can, using reflection, scan the code-base and match needs with offers. That is also why the framework needs to initialize, scan the code-base, and be part of instantiation, such that it can exert control when and where necessary.
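As a toy illustration of that mechanism (nothing more; real containers add scopes, proxies, and life-cycle management on top of this), field injection boils down to ordinary reflection:

```java
import java.lang.reflect.Field;
import java.util.Map;
import javax.inject.Inject;

// Toy illustration of the "magic": scan an object's fields for @Inject and
// match needs (field types) with offers (available instances).
final class ToyInjector {
    static void inject(Object target, Map<Class<?>, Object> offers)
            throws IllegalAccessException {
        for (Field field : target.getClass().getDeclaredFields()) {
            if (field.isAnnotationPresent(Inject.class)) {
                field.setAccessible(true);
                field.set(target, offers.get(field.getType()));
            }
        }
    }
}
```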
In many cases constructor injection (for required dependencies) or method injection (for optional dependencies) would be perfect for the job. I also advocated considering the dependency injection pattern for uses deeper inside the code-base, because you would end up with plain and simple classes that just work. This is in no way "going back to the stone age", but rather a more considerate and informed use of the capabilities of the language and libraries.
The dependency injection pattern has no trouble with whatever dependency implementation you prefer. It simply accepts an instance through the constructor. You aren't making anything more complex, because whether the dependency enters through "magic" or through a constructor argument is effectively the same: your class needs, acquires, and uses the injected instance.
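A short, self-contained sketch (all names hypothetical): constructor injection for the required dependency, method injection for the optional one, and plain manual wiring with no framework in sight.

```java
import java.util.Objects;

// Plain classes, no framework: the "dependency injection pattern" is just a
// constructor parameter. (All names here are hypothetical.)
interface Greeter {
    String greet(String name);
}

final class PoliteGreeter implements Greeter {
    @Override
    public String greet(String name) {
        return "Good day, " + name + ".";
    }
}

final class Welcome {
    private final Greeter greeter;          // required dependency: constructor
    private Runnable onShown = () -> {};    // optional dependency: method injection

    Welcome(Greeter greeter) {
        this.greeter = Objects.requireNonNull(greeter);
    }

    void setOnShown(Runnable onShown) {     // inject the optional part if desired
        this.onShown = Objects.requireNonNull(onShown);
    }

    void show(String name) {
        System.out.println(greeter.greet(name));
        onShown.run();
    }
}

public final class ManualWiring {
    public static void main(String[] args) {
        Welcome welcome = new Welcome(new PoliteGreeter()); // manual injection
        welcome.show("reader");
    }
}
```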
“Services”
As for the services: whatever is static is always accessible if the module itself is available. We focused on making stateless utilities static, because that means we can apply them whenever needed, without a second thought and without the need to inject. And if a static utility does not suffice, as might be the case if you need to set up some preferences that require state, then let's use a "service" and inject it as appropriate via the mechanisms available.
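A sketch of that distinction, with hypothetical names: the stateless part is a static utility, and only the genuinely stateful part becomes an injectable service.

```java
import java.nio.file.Path;

// A stateless "service" has no reason to be instantiated and injected; it can
// simply be a static utility, usable wherever the module is on the class-path.
final class PathUtil {
    private PathUtil() {}

    static String baseName(Path path) {
        String name = path.getFileName().toString();
        int dot = name.lastIndexOf('.');
        return dot < 0 ? name : name.substring(0, dot);
    }
}

// Only when state genuinely enters the picture (here: a configurable suffix)
// does an injectable service earn its keep.
final class ReportNamer {
    private final String suffix;

    ReportNamer(String suffix) { // inject via whatever mechanism is available
        this.suffix = suffix;
    }

    String nameFor(Path input) {
        return PathUtil.baseName(input) + suffix;
    }
}
```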
Conclusions
Now, to note: this is not all solely my idea. These are things that one discusses among colleagues, if only to gauge impressions of the circumstances, whether something like this was tried, etc. However, I seem to have had a better grasp of the core principles of what programming (and program structure) really is, as opposed to blindly accepting CDI as the way to go. One of the triggers was definitely the excessive null-checking that spreads throughout a code-base like a cancer if one cannot ever trust that an instance is present. I was definitely also triggered by the way CDI side-lines a number of static guarantees and programming-language constructs because of its dynamic nature. So, I don't claim sole ownership, but I certainly claim a significant contribution in this.
Claiming that this is less of an "enterprisey" approach may be fair, but then the "enterprisey" approach is what caused several problematic situations in the first place. Furthermore, as we do not necessarily eradicate CDI, we don't disadvantage the server code. It is, however, possible in certain cases to do everything without CDI. Working towards this goal would benefit the desktop application, because it wouldn't need the CDI framework (library) at all, which would significantly reduce start-up time and the risk of complications.
By no means was this meant as a distraction, nor am I incapable of understanding CDI. I am, though, more considerate of the actual problem that I am solving. CDI for injecting dependencies from your context is fine, and for some situations necessary. Let's then use the mechanism that allows us to maintain as much control as possible. Let's not arbitrarily inject everything everywhere, all reliant on dynamic (reflection) magic, as that side-lines our own ability to understand the code and to reduce the number of possible variations of what may happen.
As these changes were implemented over the course of the years, we were able to reduce the need for dependency injection in the desktop applications. Where an incidental tool would still need it, we would programmatically make the CDI framework part of that tool's initialization. We would initialize the framework in parallel with the Main Application (the encompassing application framework) itself to reduce start-up time, and of course most applications ran without it anyway.
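I don't recall which bootstrap API was used for this; as a sketch using the standard CDI SE API (CDI 2.0's SeContainerInitializer, with hypothetical class names), the idea looks roughly like this:

```java
import java.util.concurrent.CompletableFuture;
import javax.enterprise.inject.se.SeContainer;
import javax.enterprise.inject.se.SeContainerInitializer;

// Sketch only: bootstrap a CDI SE container in the background while the main
// application starts, so that only tools that need CDI pay for it.
public final class ToolHost {
    public static void main(String[] args) {
        CompletableFuture<SeContainer> containerFuture =
                CompletableFuture.supplyAsync(
                        () -> SeContainerInitializer.newInstance().initialize());

        startMainApplication(); // runs without CDI at all

        // Only a tool that actually needs injection waits for the container.
        containerFuture
                .thenAccept(container -> {
                    SomeCdiTool tool = container.select(SomeCdiTool.class).get();
                    tool.run();
                })
                .join();
    }

    static void startMainApplication() { /* framework UI, no CDI required */ }
}

class SomeCdiTool {
    void run() { /* tool logic with injected collaborators */ }
}
```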
Afterwards
Just now, writing this, I still think it may be nothing, except that he also demonstrated some commercial advertising framework he used on his personal site, and it seems more likely he was just trying to get me to make bad or conflicting statements regarding these topics. I was later attacked, massively, over supposed "acknowledgements" and "agreements", when I was just having a normal conversation with someone, confirming that I was listening to his story and understood his position.
I was also attacked over "decision making": these changes were rather broad and were coordinated among teams. This is not something done "on a whim", with all the other teams just accidentally agreeing without thinking it over.
Side-note
Now, I want to point something out: I write this because I have been attacked en masse for years afterwards. This is not to say "I didn't get to do all the things I wanted". That isn't the problem; I found my way even when I didn't get into the ideal position. I write this because there were clear, deliberate intentions to misrepresent my actions and to attack me in various ways. And for a significant part, these actions were based on woefully misunderstanding the circumstances and on outright dismissal of any chance that I might have valid reasons and might not be "over-reacting". Over the past 5 years it has become increasingly clear that it was easier to attack me over everything than to fix the mess.
Furthermore, for years I have been attacked over many false accusations, which other people used as an "excuse" to be complete assholes to me. I know a lot of these things originated at ASML, and that (people at) ASML have not been kind to me, which is putting it very mildly. These things became clearer towards the end of the assignment, and I was too fucking naive to consider such attacks before. That's why I write this now.
Changelog
This article will receive updates, if necessary.
- 2025-02-24 Initial version.