And that's why I said I didn't want to waste more energy on this: we clearly have completely different coding philosophies and will never see eye to eye, at least for the time being.
Still, I'm making this post to lay out, as clearly as possible, where I am coming from, so that others can understand it too.
You're clearly very oldskool, treating code as if it needs to run on machines with weak CPUs and 32 MB of RAM, so you reach for every possible "hack" and even write redundant code with no room for expansion, just to save a meaningless bit of memory and cut a few CPU cycles.
I, on the other hand, realized a while ago that by doing so you reach a point where the code just becomes "garbage", no matter how simple and nice it looks at first. If it hard-limits the architecture of the whole system and introduces issues and limitations, it's simply not good code, period. Even if that tiny piece of code runs flawlessly by itself and seems to cause no harm, that doesn't make it good code from an architectural point of view, and this shows up especially when you try to change it, extend it, or simply interact with it; that's when you end up having to refactor everything.
Sometimes hacks and quick solutions are needed, when there's no other way, but when a more correct approach exists they should be avoided entirely.
That's why I mentioned "premature optimization", which is exactly what you're advocating for, whether you realize it or not. You should only optimize when you actually *need* to optimize, not before; that is what "premature optimization" means. Before optimizing, the focus should be on having a good code architecture in the first place.
This does NOT mean you should just code away disregarding performance; in fact it means the opposite. Performance considerations should be part of the overall design, but at the level of the big picture, not as small specific optimizations here and there, not least because "performance" is no longer the limiting factor in coding nowadays; "change" is.
Hence what I proposed was not a premature optimization; it was the very opposite: to use what already existed, the way it was meant to be used, rather than duplicate the code just to avoid spawning an actor which does the same thing.
It all comes down to DRY, SOLID, composition over inheritance, and so on: basic, proven programming principles which to you may be mere "theories", and which are admittedly unknown to most of the community, especially with an engine which follows none of them. But if the community keeps seeing these kinds of "solutions", implementing them as-is, and never adopting these principles, they will always end up with all sorts of basic bugs and limitations in their code that they wouldn't have otherwise, and it will look more and more like dark magic rather than something technologically sound. Not to mention that mods won't play well together either (we already have this problem today, and make no mistake, I am equally guilty of it myself from the days when I didn't know any better).
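To make the DRY and composition-over-inheritance point concrete, here's a minimal sketch in plain Python (not engine code; every class and method name here is invented purely for illustration):

```python
# Anti-pattern (violates DRY): each projectile copy-pastes the same
# explosion logic, so a fix or change must be repeated everywhere.
class RocketCopyPaste:
    def explode(self):
        return "spawn effects, deal radius damage"  # duplicated

class GrenadeCopyPaste:
    def explode(self):
        return "spawn effects, deal radius damage"  # same code, copied

# Composition: one reusable component holds the behavior, and each
# projectile *has* an explosion rather than re-implementing one.
class ExplosionComponent:
    def trigger(self):
        return "spawn effects, deal radius damage"

class Rocket:
    def __init__(self):
        self.explosion = ExplosionComponent()  # reuse, don't duplicate

    def explode(self):
        return self.explosion.trigger()

class Grenade:
    def __init__(self):
        self.explosion = ExplosionComponent()

    def explode(self):
        return self.explosion.trigger()
```

Changing the explosion behavior now means editing one class instead of hunting down every copy, which is exactly the "easy to change" property I keep arguing for.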
It's a matter of principle, and of looking ahead at the big picture you're drawing, rather than at the specific brushes, lines and colors you're using to paint it. That's all.
I believe you're simply being shortsighted and not seeing the overall picture, and that's why we won't be able to see eye to eye.
Not that there aren't other programmers on this forum doing similar things, but since you're a new member, I thought it was worth finding out what kind of programmer you are, and whether we could have a productive exchange, for both sides.
And on a last note concerning actors in general: the wiki is not completely accurate (like any wiki, for that matter). For instance:
For example, do not try to create a particle system by spawning 100 unique actors and sending them off on different trajectories using the physics code. That will be sloooow.
And guess what: that's a terrible example, because it's simply not accurate.
It's true that actors are heavyweight objects: they have a long initialization process; they carry tons of unneeded properties which require memory to be allocated and CPU time to be spent checking them for replication; they're part of a global actor list, which affects how fast an AllActors iterator (and similar ones) runs; and they're ticked every frame unless set otherwise. But the truth is that a number like "100" didn't make sense even 10 years ago, and I have personally proven this time and time again by going way past that value in the total number of actors doing physics, in the order of thousands at times, with no issues. No matter how bad a picture the wiki paints of actors and iterators, they're not that problematic overall; they weren't 10 years ago, and they certainly aren't nowadays. The wiki exaggerates the performance cost of all of this.
Even machines from 10 years ago were fast enough to handle this, and the game itself (which brings the whole premature-optimization point full circle) already spawns a lot of stuff at a standard level: projectiles, effects, gibs, etc., with absolutely no recycling at all. Even weapons and pickups: whenever you pick one up, it spawns a brand new "copy" of it.
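As a rough illustration outside the engine (plain Python, with a made-up `Particle` class standing in for a physics-simulating actor; this is a sketch of the scale argument, not a real benchmark of the engine):

```python
import time

# Hypothetical stand-in for an actor doing basic projectile physics.
class Particle:
    __slots__ = ("x", "y", "vx", "vy")

    def __init__(self, i):
        self.x, self.y = 0.0, 0.0
        # Vary initial velocities so the particles fly on different paths.
        self.vx, self.vy = float(i % 7), float(i % 5)

    def tick(self, dt):
        # Simple Euler integration with gravity.
        self.x += self.vx * dt
        self.y += self.vy * dt
        self.vy -= 9.8 * dt

# "Spawn" thousands of them and tick 100 simulated frames.
particles = [Particle(i) for i in range(5000)]
start = time.perf_counter()
for _ in range(100):
    for p in particles:
        p.tick(1.0 / 60.0)
elapsed = time.perf_counter() - start
print(f"ticked {len(particles)} particles x 100 frames in {elapsed:.3f}s")
```

Even an interpreted language chews through this on any modern machine; compiled engine code handling a *single* extra actor, or even a hundred, is nowhere near a bottleneck.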
If this worked from the get-go on the limited hardware of the time, why are we trying to avoid spawning a *single* actor here, just because it's considered "heavyweight" by a wiki which isn't fully accurate to begin with?
Especially when the weight of this actor is no different from everything else the game itself spawns, sometimes in large quantities, anyway?
All I am asking is that you look at the big picture, and stop indulging in practices which violate principles like DRY and SOLID just for the sake of micro-optimizations.