I want to split my initialization trigger into several blocks to clean my code up a bit. I see Blizzard maps using both triggers with Run Trigger for splitting this (StarCraft Master, Aiur Chef, Left 2 Die) and action functions (StarJeweled). I thought action functions would be the best method, but now I am not so sure anymore. Would appreciate it if someone could clear this up a bit.
I'm not sure I understand what you are asking for, but if you are trying to do something once something else happens, and you want to split it into multiple triggers, you could do this: assuming you have all of the triggers made, go to the left panel and right-click each trigger you want to run later, then uncheck the option labeled something like "Initially On". This stops the trigger from running until you tell it to. Then, in your main trigger, use If statements to decide when something should happen and use the Run Trigger action to run the appropriate trigger.
That's not really what I meant, but thanks for the information anyways. :)
I'll try to clarify with an example:
I have one trigger that is run when the game initializes. That trigger does things like set up the AI, light up monolith bridges, set up the UI, set up revealers, etc. Right now, all those setups are in one trigger, but I want to have a separate "block" for each. I want to know whether it's better to separate it into action functions or triggers.
You can have multiple "map init" triggers. It really doesn't hurt anything. I personally use Action Definitions, because it makes me feel fancy. My init trigger just says to run a dozen action definitions. The biggest problem with this (really the only problem) is that it can run multiple action definitions at once if you have Create New Thread enabled on them, which many people do. It will run the actions side by side. The problem arises if you use variables set in one of them in another. It may take a little bit of organizing.
Anywho, with that preamble:
Create action definitions and use them in the same way you would create more map init triggers. Paste a block of code into one, then, in the map init trigger, run the action (whatever you named the action definition will appear in the action list).
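To make that concrete, here is a hedged sketch of roughly what the generated Galaxy looks like when one map init trigger runs a couple of action definitions. All names here (gf_SetupAI, gf_SetupUI, gt_MapInit) are hypothetical, in the style of the editor's generated script:

```galaxy
// Sketch only; names are hypothetical stand-ins for your own blocks.
trigger gt_MapInit;

void gf_SetupAI() {
    // AI setup actions would go here
}

void gf_SetupUI() {
    // UI setup actions would go here
}

bool gt_MapInit_Func(bool testConds, bool runActions) {
    if (!runActions) {
        return true;
    }
    // Each action definition compiles to a plain function call.
    gf_SetupAI();
    gf_SetupUI();
    return true;
}

void InitTriggers() {
    gt_MapInit = TriggerCreate("gt_MapInit_Func");
    TriggerAddEventMapInit(gt_MapInit);
}
```

If Create New Thread is checked on an action definition, I believe the generated call changes so the action runs in its own thread instead of inline, which is where the ordering caveat above comes from.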
I am not sure there is a notable difference between the two; I have tested it a bit, and nothing amazing shows up in the debugger in terms of run time.
I would only create actions for things that need to be run multiple times and have different parameter sets (or need to return a value but that's not important in this context).
For example, I might create an action that respawns a hero, complete with fancy graphics and sounds and camera movement. Heroes die a lot, so this would be run multiple times. And the parameter of the action would be which hero needs to be respawned... or possibly the player whose hero needs respawning, depending on how I set it up.
I think triggers are better suited to the task here. Triggers may or may not run multiple times, but the defining difference is they have no parameters (and they take events but that's not important in this context either). I would create an Init folder and put an Init trigger as the first item then a trigger for each block - one for the dialog setup, one for bridges, one for AI, etc.
Your Init trigger would have the Event: Map Initialization. And all it would do is call Run Trigger on the rest of the triggers.. in the appropriate order that they should be called in.
Your other triggers in the Init folder would have no event and rely on the Init trigger to run them.
If you have a trigger that doesn't depend on anything from any other trigger.. ie, it doesn't matter if the ai is setup before or after the bridges.. then you can say "Don't Wait" when you use the Run Trigger action.
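In generated Galaxy, that Init layout might look something like this. This is a hedged sketch with hypothetical trigger names; Run Trigger corresponds to the native TriggerExecute, though exactly how the GUI's Wait/Don't Wait option maps to it is an assumption on my part:

```galaxy
trigger gt_Init;
trigger gt_SetupBridges;  // no event registered; only ever run by gt_Init
trigger gt_SetupAI;

bool gt_Init_Func(bool testConds, bool runActions) {
    if (!runActions) {
        return true;
    }
    // Run Trigger on each block, in the order they should happen.
    TriggerExecute(gt_SetupBridges, true, true);
    TriggerExecute(gt_SetupAI, true, true);
    return true;
}

void InitTriggers() {
    gt_Init = TriggerCreate("gt_Init_Func");
    TriggerAddEventMapInit(gt_Init);
    // gt_SetupBridges_Func / gt_SetupAI_Func look like gt_Init_Func,
    // minus the event registration.
    gt_SetupBridges = TriggerCreate("gt_SetupBridges_Func");
    gt_SetupAI = TriggerCreate("gt_SetupAI_Func");
}
```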
Actions: they are faster and produce less code. There's no reason to use a trigger without events, except when the trigger being run needs to be changed at runtime.
My advice: Define "event handlers", meaning ONE trigger per general event, such as unit dies, and from there execute actions (Passing triggering unit as parameter). Fastest and cleanest way of doing things. Also makes it easy to change order of execution and/or to see what exactly happens from that event on.
Miles is correct about event handlers. You then use logic in the event handler to fire off functions or other triggers. Yes, extra triggers and functions are slower, but it slows down the game so little that I wouldn't even worry about it; the organization of the code is more important.
Actions are slimmer, I suppose, since an action is simply a function and a trigger is, I believe, 2 functions and a trigger object. I'm not sure the difference is quantifiable though.. haven't come across any benchmarking for sc2 stuff.
I guess the only advantage of using a trigger over an action here is that it doesn't add even more items to the already very lengthy action list.
If you're really concerned with performance, you should keep it all in a single map trigger (or better yet, write directly in galaxy using native functions).
I would prefer triggers in this case, but I can understand the argument for actions being a compromise between performance and organization. I'll just say, until I can see benchmarking on it, the difference is likely a fraction of a millisecond on something that's only called once, so how much does it really matter? :)
That seems like a really nice way to handle events. Makes sense not to have two triggers track the same event. I'll start merging events and use that from now on.
For the initialization, I do like the idea that the "initialization blocks" aren't added to the actions list. Is there some way to hide those functions from the list after I've used them?
Quote:
My init trigger just says to run a dozen action definitions. The biggest problem with this (really the only problem) is that it can run multiple action definitions at once if you have Create New Thread enabled on them, which many people do. It will run the actions side by side. The problem arises if you use variables set in one of them in another. It may take a little bit of organizing.
I don't think it's really running side by side. It's very deterministic in which order the code is executed:
- Triggers above other triggers in the code run before those other triggers when they fire on the same event.
- Already running threads/triggers are executed before new triggers.
- Threads started from triggers are executed last.
- Threads executing a wait(0) will continue their code after all existing threads.
So, actually, threads are executed in order of a list.
Quote:
I'll just say, until I can see benchmarking on it, the difference is likely a fraction of a millisecond on something that's only called once so how much does it really matter.
In general this is correct; however, there are some scenarios where you can really benefit from using action calls, such as "unit takes damage" or other very frequently fired events, especially if the actual code is quite fast, too.
Actions are faster for multiple reasons:
- TriggerExecute() is way slower than just calling a function.
- They return void.
- They can check conditions faster than triggers, because they skip the unnecessary negation that trigger conditions use.
- Event properties only need to be read out once, stored in a variable, and then passed by parameter, such as "Triggering Unit". All of those event responses are actually functions, which are slower to call than reading variables/parameters, so the more often you need to access event properties, the more actions pull ahead.
Also, as mentioned, the event handler approach produces way, way less code, because:
- Events don't get added to triggers multiple times.
- The action calls themselves contain way less Galaxy code than triggers, because the trigger initialisation and registration are missing.
- Bloated generated code segments, such as condition validation or return true/false, are not necessary.
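To illustrate, here is a hedged side-by-side sketch of the generated code shapes (all names hypothetical). The trigger version drags along a condition wrapper, registration, and repeated event-response calls; the action version is a bare function that takes the response as a parameter:

```galaxy
// Trigger version: a wrapper function plus a trigger object and registration.
bool gt_OnDeath_Func(bool testConds, bool runActions) {
    if (testConds) {
        // GUI conditions compile to a negated test like this:
        if (!(UnitGetOwner(EventUnit()) == 1)) {
            return false;
        }
    }
    if (!runActions) {
        return true;
    }
    // Every "Triggering Unit" here is another EventUnit() call.
    return true;
}
// ...plus, at init: gt_OnDeath = TriggerCreate("gt_OnDeath_Func");
//                   TriggerAddEventUnitDied(gt_OnDeath, null);

// Action version: one plain function; the event response is read once
// by the caller and passed in as a cheap parameter.
void gf_OnDeath(unit lp_dyingUnit) {
    // uses lp_dyingUnit directly
}
```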
I always wondered how the threading worked. I just ran some tests and you're right: it looks like it doesn't actually have threading; it just moves things around the stack, much like JavaScript does. Makes a lot more sense this way now.
Ordering seems a little weird. I created three triggers: A, B, and C. All iterate 1 through 3 printing out their letter and then waiting on each iteration.
I ran a few tests to see the order of execution.
A and B execute on map init, C has no event. A calls C with Don't Wait as its first action.
I expected to get CAB CAB CAB .. because A and B would go on the stack from the event, A would execute first, which would call C immediately. C would print then wait (move to end of stack). A would print and wait. Then B would print and wait. C was first moved on to stack so keep iterating in that order...
I did get CAB, but then I got BCA BCA.
If I remove C, I get AB BA BA.
If I do C on map init, I get ABC CBA CBA
If I do C as first action of A with Wait, I get CB BC BC AAA
If I do A and C on init and A calls B with Don't Wait, I get BAC CBA CBA
If I do B and C on init, and C calls A with Don't Wait, I get BAC ACB ACB .. which is the first time that the last thing in the first set wasn't called first in the second set.
Been staring at this a while now.. what's happening is that things are going onto the stack in reverse order and then maintaining reverse order. The first set is always different because that's executing in operational order.. after that, it executes in function order.
So if you look at the last one, B gets called from map init and prints B, then C gets called from map init and executes A which prints A then C finishes executing to print C .. BAC. BUT the function order was BCA.. which is ACB reversed. If you follow that logic for all of them, it works out. It's almost like it's executing functions from the bottom up, but executing events from top down. Or in other words, event triggers are executing from the top of the stack while everything else executes from the bottom.
TLDR: Don't rely on any set order of execution. If B shouldn't execute until after A, then make A execute B when it's done. Don't rely on events or waits or which function is above the other in the list.
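For anyone who wants to reproduce these tests, the harness is roughly the following (a hedged sketch in generated-style Galaxy; trigger names as in the post, and the exact Don't Wait wiring is my assumption):

```galaxy
trigger gt_A;
trigger gt_B;
trigger gt_C;

bool gt_A_Func(bool testConds, bool runActions) {
    int i = 1;
    if (!runActions) {
        return true;
    }
    while (i <= 3) {
        // Print the letter, then yield to the scheduler by waiting.
        TriggerDebugOutput(1, StringToText("A"), true);
        Wait(1.0, c_timeGame);
        i = i + 1;
    }
    return true;
}
// gt_B_Func and gt_C_Func are identical except for the letter printed.
// First test: gt_A and gt_B get TriggerAddEventMapInit, gt_C gets no
// event, and gt_A's first action is a Run Trigger on gt_C (Don't Wait).
```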
@Mille25: You are starting to sway me towards using actions for this scenario, primarily because I agree that if a trigger doesn't have an event, then there's no reason to incur the overhead (whatever its size) of creating a trigger.. you can simply have your one map init trigger call a few functions. However, if I may play devil's advocate to your points... and also perhaps lend a little more information to the topic, because I think this has been a really great discussion and I've already learned quite a bit from it...
Quote:
- TriggerExecute() is way slower than just calling a function
- They can check conditions faster than triggers, because they skip the unnecessary negation that trigger conditions use
The speed difference here is likely negligible. I don't know how TriggerExecute() works.. it's probably doing a dictionary lookup and calling the associated function, but with proper optimizing, we could be talking nanoseconds. And the difference on a bit-wise operator is even less than that.
Quote:
- The action calls themselves contain way less Galaxy code than triggers, because the trigger initialisation and registration are missing
- Bloated generated code segments, such as condition validation or return true/false, are not necessary.
I know this isn't the meaning of your argument, but I want to clarify for anyone else that might read it this way: just because you have less source code doesn't necessarily mean you'll have less or quicker compiled code (and as far as I know, StarCraft 2 does compile the Galaxy code). For instance, compare a += 1; vs a = a + 1;. The first one is shorter, but these are going to compile to exactly the same thing (or they certainly should if the compiler is any good).
To the meaning of your argument... if we already have a single map init trigger, that's a given, and the difference is between it then executing 5 triggers or calling 5 actions. The triggers add 5 functions and 5 function calls one time only on map init to set themselves up, and then on execution it's 5 more if statements to check whether each should run. Then there are 5 trigger objects in memory, and TriggerExecute runs 5 times. So given all that.. yeah, that sounds like a good bit more... which is why I said that for this scenario I'm seeing the advantage of using actions over triggers, even if they do get added to the action list (I saw the comment about hiding them afterwards, but that just seems like a pain if you ever need to move them: unhide, re-create, re-hide - I would just keep them visible and live with it).
Quote:
- Event properties just need to be read out once, stored in a variable and then passed by parameter, such as "Triggering Unit". All of those event responses are actually functions which are slower to call than reading variables/parameters, so the more often you need to access event properties the more actions will pull ahead.
I thought this was an interesting argument... and one that's very subjective. So I'd like to lay out a bunch of scenarios and discuss the approaches:
If you're using an event property only once, it's actually quicker to just make the single function call than it would be to store the function result in a variable and pass that variable to a function. As you said, the more you use it, the more performance favors the action.. but it goes the other way too - the less you use it, the more performance favors the plain trigger. I could strengthen your argument further as well by removing the middle step: don't store it to a variable, just pass it.
For instance, choose one (pseudo-code):
UIDisplayMessage(UnitName(TriggeringUnit())); // trigger uses the event response directly, vs
unit myUnit; myUnit = TriggeringUnit(); PrintUnitName(myUnit); // trigger stores it, then passes it to an action, vs
PrintUnitName(myUnit); // inside the action, where myUnit is just the parameter
The last example is the least code; the first is going to be the quickest if that's really all you need to do.
HOWEVER, if you do need to do more with the triggering unit, you can still store it in a local variable in the trigger and then use that rather than calling the function again and again.. this would be a very good practice. One should never call the same function twice (whether it be a Blizzard function or one you make yourself) unless you're reasonably certain the outcome will be different.
In addition, Actions in the Trigger Editor do not allow you to modify params. So if you had a trigger like On Unit Takes Damage and passed TriggeringUnit() to your Action as a param, and then wanted to modify the unit param, you can't. Think of a chain lightning ability done in a trigger where you pass the first target and then the action does the bouncing. Your target will keep changing on each bounce.. but you can't change your target param, so you end up having to create a local variable for your target, set its initial value to your param, and then modify that from there (I actually did this a couple weeks ago). This creates more source code and is (negligibly) worse for performance than if you just had a local variable to work with.
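That workaround looks roughly like this in Galaxy terms (a sketch; gf_ChainLightning and gf_FindNextTarget are hypothetical names):

```galaxy
void gf_ChainLightning(unit lp_firstTarget, int lp_bounces) {
    // The GUI won't let us reassign lp_firstTarget, so copy it to a local
    // variable and re-target that on each bounce instead.
    unit lv_target = lp_firstTarget;
    int lv_i = 0;
    while (lv_i < lp_bounces) {
        // ...deal damage / play effects at lv_target here...
        lv_target = gf_FindNextTarget(lv_target);  // hypothetical helper
        lv_i = lv_i + 1;
    }
}
```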
Also consider the case where your action is used multiple times but it's only ever called by a single trigger. Say on unit death you want to award bounty. You wire up the events to your trigger, you grab the dead unit and the killing player and send those over to an action to determine the amount of bounty and award it. Now for that, you've created 3 functions (2 for the trigger + 1 for the action) when you really only needed the single trigger (2 functions)... all other things being equal.
I do like the idea of having a localized place for event handling. I'm not arguing against that. If we take the last example but let's say in addition to awarding bounty on unit death, maybe it's a gladiator type of game so when a unit dies the crowd cheers and maybe a banner enters the screen proclaiming the victor and loser and.. I dunno.. the killing unit does a little dance. So now we have 4 "things" that take place. For organization purposes, I'd make those 4 separate actions all calling from a single trigger.. I really like that idea. The banner and the bounty actions might both need to know the defeated unit so you could put that in a variable and then pass the var rather than call the DyingUnit() function twice. The banner also needs to know the victor so maybe you pass KillingUnit() directly.. maybe you still assign it to a variable for readability and consistency.
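A hedged sketch of that fan-out (the gf_* names are hypothetical action definitions with their bodies elided, and the killing-unit response is my guess at what the GUI's Killing Unit wraps):

```galaxy
bool gt_UnitDies_Func(bool testConds, bool runActions) {
    unit lv_victim;
    unit lv_killer;
    if (!runActions) {
        return true;
    }
    lv_victim = EventUnit();                  // Triggering (dying) Unit, read once
    lv_killer = EventUnitDamageSourceUnit();  // Killing Unit, read once
    gf_AwardBounty(lv_victim, lv_killer);
    gf_ShowBanner(lv_victim, lv_killer);
    gf_CrowdCheer();
    gf_VictoryDance(lv_killer);
    return true;
}
```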
If you are worried about performance though, it would still be best to keep the code for them in the trigger since none of those things will happen outside of a unit death event.
So what I'm saying is that whether or not a trigger or action is "best" depends on the circumstances in which it's being used.
Also, I'm trying to point out that the war between performance vs organization is exactly that.. almost anytime you do something to better organize your code, you're going to hurt performance. Function calls have overhead associated with them (if you've ever seen assembly code, functions don't actually take params the way we think of it.. you have to first push all the params onto a stack and then the function takes them down off the stack in a consistent order so the more params you have the more expensive your function call is both in terms of memory and performance).
Technology is reaching a point where I think performance is becoming less and less of a concern. Processors are getting faster. GPUs are taking on some of the work. That's why I say that until I see benchmarks, most of these concerns are negligible. We're not building nuclear reactors here. :) Everything is a case-by-case.
I mentioned that I like the idea of a single place for event handling, but that depends on the situation as well.
Let's imagine a game where some abilities are created using triggers rather than the data editor. So.. we have a game where bounty is awarded on unit death.. but somebody also has a passive ability where each time he kills a unit, a small explosion occurs at the location of the corpse to damage nearby enemies.. Corpse Explosion. Now.. do I have a single On Unit Death trigger that calls the bounty action and the corpse explosion action (letting the corpse explosion action check the killing unit to see if he has the passive or not)? Or does my On Unit Death trigger call the bounty action and then check whether it should call the corpse explosion action (segmenting the logic)? Or do I have two triggers... one for the bounty action (and any other simple actions that always occur on unit death - i.e., things generic to the game)... and a separate trigger grouped with anything else (variables, actions, etc.) pertinent to my Corpse Explosion ability?
Personally, I think I like the last scenario here.. I think I like the idea of having every last bit related to my corpse explosion in a single folder that I can then copy/paste into another map if need be. And then if I want to change the eventing for corpse explosion, I look at my corpse explosion ability folder rather than looking through my eventing to see what calls the corpse explosion.
Whew, that was a lot. Brevity is not my strong suit. I think I hurt my own head writing all of that. :D
I pretty much agree with 100% of the stuff you said, just a couple of comments. :)
Quote:
The speed difference here is likely negligible. I don't know how TriggerExecute() works.. it's probably doing a dictionary lookup and calling the associated function, but with proper optimizing, we could be talking nanoseconds. And the difference on a bit-wise operator is even less than that.
Absolutely. As I've said before, it only really makes a difference for very frequently executed code, but I actually got some quite amazing results with stuff like unit takes damage.
Quote:
I know this isn't the meaning of your argument, but I want to clarify for anyone else that might read it this way, just because you have less source code doesn't necessarily mean you'll have less or quicker compiled code (and as far as I know, Starcraft 2 does compile the galaxy code).
In this case I disagree that compile speed is not affected. I can't prove it, but I'm almost certain that compiling a trigger takes longer than compiling an action (when saying "compile" I'm actually talking about converting them from GUI to Galaxy in this case), simply because a lot more code needs to be generated. The actual Galaxy compile time should also be faster, because there are fewer functions overall. It's not really comparable to i += 1 in this case.
Quote:
...even if they do get added to the action list...
That's a point I never thought about, but I don't really think it's an argument for either side. Triggers also get added to the trigger value list, and even though the action list is obviously used more often, I never suffered from it since I ALWAYS just use the search bar. Load times also didn't seem to increase significantly.
Quote:
If you're using an event property only once, it's actually quicker to just make the single function call than it would be to store the function result in a variable and pass that variable to a function. As you said, the more you use it, the more performance favors the action.. but it goes the other way too - the less you use it, the more performance favors the plain trigger.
That's true, but the reality is that in 95-99% of all cases you need event responses multiple times, which I would say makes that argument sort of moot, even though it is theoretically correct. Obviously the event handler concept makes the most sense for big projects with a lot of code.
Quote:
One should never call the same function twice (whether it be a blizz function or one you make yourself) unless you're reasonably certain the outcome will be different.
This is another reason why calling actions is faster. Let's say you go for the good old trigger approach and have 5 triggers with the same event. Even if you write the event properties into local variables in every trigger, they would still get read 5 times, and combining all triggers into one would simply be an absolute nightmare for organization and in reality not really possible (even though it would, in theory, offer the best performance).
Quote:
In addition, Actions in the Trigger Editor do not allow you to modify params.
That's not entirely correct. If the performance benefit is really needed, you can use custom script to modify parameter values. I'm not sure why the GUI doesn't support it, but it's certainly possible.
I would really be interested in Blizzard's reasoning behind not implementing this in the GUI. :)
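The Custom Script escape hatch works because an action definition's parameter is just an ordinary Galaxy function parameter, so a one-line script action can reassign it. A sketch with hypothetical names:

```galaxy
void gf_Bounce(unit lp_target) {
    // The GUI refuses to generate an assignment to lp_target, but a
    // Custom Script action containing this line compiles fine:
    lp_target = gf_FindNextTarget(lp_target);  // hypothetical helper
    // ...continue using lp_target as the new bounce target...
}
```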
Quote:
Also consider the case where your action is used multiple times but it's only ever called by a single trigger. Say on unit death you want to award bounty. You wire up the events to your trigger, you grab the dead unit and the killing player and send those over to an action to determine the amount of bounty and award it. Now for that, you've created 3 functions (2 for the trigger + 1 for the action) when you really only needed the single trigger (2 functions)... all other things being equal.
The event handler approach doesn't necessarily forbid using code directly in the event handlers, but I mostly just do it for one-liners which are not called from anywhere else, since otherwise the organization gets completely messed up.
Quote:
So what I'm saying is that whether or not a trigger or action is "best" depends on the circumstances in which it's being used.
I agree 100%, but from my experience, in reality calling actions always pays off.
Quote:
Also, I'm trying to point out that the war between performance vs organization is exactly that.. almost anytime you do something to better organize your code, you're going to hurt performance. Function calls have overhead associated with them (if you've ever seen assembly code, functions don't actually take params the way we think of it.. you have to first push all the params onto a stack and then the function takes them down off the stack in a consistent order so the more params you have the more expensive your function call is both in terms of memory and performance).
Again, absolutely correct, but as said above, pasting all code into the same trigger would be insane and is pretty unrealistic. :D
Quote:
Technology is reaching a point where I think performance is becoming less and less of a concern. Processors are getting faster. GPUs are taking on some of the work. That's why I say that until I see benchmarks, most of these concerns are negligible. We're not building nuclear reactors here. :) Everything is a case-by-case.
Right... Still, Galaxy is about 1000x slower at arithmetic operations than C++, and even worse with string handling, and we are still programming games, where even today C++ dominates for performance reasons. So while it's absolutely true that performance doesn't matter 95% of the time, I would still try to use all possible advantages if they don't require too much work or mess up organisation or readability, and the event handlers are a perfect example of that, IMO.
That said, even SCU doesn't use it perfectly, and just for very performance-intensive pieces of code; a lot of the event handlers are very old and still call triggers, because I'm too lazy to convert all of them into actions and the benefit isn't worth it.
So no, I'm not saying everyone should start converting their code immediately; it's just something to consider for future projects or additions. :D
I agree with, and have tested, much of the above. Especially unit takes damage events, which, for a short while, I thought would be the end of my project. For the specific case of map init though, at least with the dozen map init triggers I condensed into one, the run time is not notable. Like, less than 0.1 seconds in the debugger window, and my stopwatch couldn't tell the difference.
There are, uhhh, what are they called in the editor.. "action blocks" I call them. They just subgroup a list of actions so you can minimize that list while dealing with other things. Now, if there was an option to have them all minimized on start, that would be stellar. Sadly, there isn't, so you need to manually click to minimize each one before you can start messing with the specific things you want to mess with. I am not sure if they have any effect whatsoever on performance, as they're an editor-only dealie.
Here is a map init question I have, with smart people in the area... Does picking units/players during map init slow the loading down? In WC3, this was a thing. A big thing. You should never pick units or players during map init, due to the way things were loaded into the game. It would attempt to pick a player that didn't exist yet and freeze up until said player got far enough into the loading process that it recognized them. It could also cause server splits. Not to steal the topic or anything, but do either of you know if this has any effect in SC2?
Quote:
I am not sure if they have any effect whatsoever on performance, as they're an editor-only dealie.
No, they do not get converted into Galaxy at all; it's a purely visual GUI thing.
Quote:
Does picking units/players during map init slow the loading down?
Everything you do in script on map init slows the map loading time down by a very small fraction of a second, but I wouldn't know why picking units/players would be worse than anything else.
I wanted to clarify one thing.. compiled code is machine-level code. I don't know for certain if SC2 compiles to machine-level code, but I'm pretty sure it compiles the Galaxy script into something closer to machine-level code, so I'll call it machine-level for the sake of simplicity.. it creates some kind of low-level instruction set that it can execute much more quickly than it could parse the Galaxy script in real time. In all honesty, it's not even fair to call it script. Ctrl+F11 is the "View Script" command, but scripts are parsed and executed at run time, which gives them the distinct advantage of being able to do things like having functions as variables and dynamically executing code on the fly. Due to how limited Galaxy is, and due to its similarity to C and its use of references and strongly-typed variables, I'm pretty certain it's not script. Blizzard used Lua in WoW for the addon interface, and it was extremely powerful and flexible, so I can only imagine the reason they didn't use that here was performance.. and to get better performance, that means using a language that's compiled.. not a scripting language. But I digress. :)
When you're using the trigger editor, everything you do is converted to galaxy script (or code.. or whatever). You can see it by hitting Ctrl+F11. I believe this is the step you're thinking of as being faster using actions vs triggers? This step doesn't affect game performance at all and is virtually immediate within the editor.
When you save the map, hit Ctrl+F12, or at seemingly random other moments as well, the game will try to compile your code. It's a bigger deal when writing Galaxy script manually than when doing trigger stuff... and the editor actually won't save your map if there are any compilation errors. This is the step, though, that converts all of the Galaxy script.. the human-readable code.. into machine-level code. This step could take a little while on really, really large projects, and if you get down into it, the less Galaxy script, the faster it would be, but again it's negligible. I've compiled large Java projects that take 10-15 minutes to finish. I can't imagine anything in SC2 taking more than a few seconds. I'd be curious to know how long SCU takes. :) There's no indication of when it finishes though.. if there's an error it'll ding and show the error, but if it all works, I think the window just flashes or something.
The comparison of i = i + 1 and i += 1 was to demonstrate that regardless of how you type it, that'll compile into the same machine code. It'll run the same. We can't see that code.. I assume it's in the mpq somewhere but it's binary and we wouldn't understand it.
The parameters, yeah, I have no idea why they don't let you edit them. If you look at the galaxy script, they're simply functions with regular parameters. If you wrote that same function manually, you could edit them. It's not a limiting feature of the language, it's of the GUI. But to your point, while you could do a custom action to modify them, I feel that defeats the purpose of using the GUI. If you're doing work with the Trigger Editor you're able to rename and move things around and it'll update all references and write function declarations for you. Once you start doing custom code, you break that. If you really want to be as performant as possible though, you would write everything in galaxy script manually. Many of the actions in the Trigger Editor are purely wrappers around native functions that add a little more logic (like getting "default" facing) or simply change the order of the parameters. Every time you use these actions, you're suffering the performance penalty of one more function call. So getting into that discussion is a whole other beast. :)
Quote:
I wanted to clarify one thing.. compiled code is machine-level code. I don't know for certain if SC2 compiles to machine-level code, but I'm pretty sure it compiles the galaxy script into something closer to machine-level code so I'll call it machine-level for the sake of simplicity..
As far as I know, Galaxy is an interpreted language and therefore doesn't get compiled any further. I'm not aware of any additional files getting created on code compilation within the map's MPQ, and the performance of the language is so bad that it almost HAS to be interpreted. :)
Quote:
Due to how limited galaxy is and due to its similarity to C and its use of references and strongly-typed variables, I'm pretty certain it's not script. Blizzard used Lua script in WoW for the addon interface, and it was extremely powerful and flexible, so I can only imagine the reason they didn't use that here was for performance.. and to get better performance, that means using a language that's compiled.. not a scripting language. But I digress
I think the main reason why the language is so simple is that it would have been too much work to write a more complex interpreter/compiler. Blizzard could not use a "real" language such as LUA because it would probably have been too powerful (allowing users to actually damage computers, create files, etc.) and most features would not work well with a GUI interface, which seems to be their desired way of doing stuff.
The editor was designed to be used by beginners and people who have no programming background, which is why they focused on a very simple scripting language which could do basic tasks.
Quote:
When you're using the trigger editor, everything you do is converted to galaxy script (or code.. or whatever). You can see it by hitting Ctrl+F11. I believe this is the step you're thinking of as being faster using actions vs triggers? This step doesn't affect game performance at all and is virtually immediate within the editor.
I'm not 100% certain on this, but I think the compilation from GUI to Galaxy only happens when you save the project or hit View Script. When I said compilation time is longer when using triggers instead of actions, I was talking about that particular step.
The actual "compilation" of the .galaxy file is probably just a syntax check.
Quote:
This is the step though that converts all of the galaxy script.. the human-readable code.. into machine-level code. This step could take a little while on really, really large projects and if you get down into it the less galaxy script the faster it would be, but again it's negligible.
As I've said above, I don't think a step like that is actually happening. I think a big chunk of time is spent converting GUI to Galaxy. It doesn't really matter though, since triggers will always be slower anyway, because there is more code to be compiled and checked.
Quote:
But to your point, while you could do a custom action to modify them, I feel that defeats the purpose of using the GUI. If you're doing work with the Trigger Editor you're able to rename and move things around and it'll update all references and write function declarations for you.
Good point. I was just saying that it's theoretically possible if it's really needed. It's nothing one should do frequently. :D
Quote:
Many of the actions in the Trigger Editor are purely wrappers around native functions that add a little more logic (like getting "default" facing) or simply change the order of the parameters. Every time you use these actions, you're suffering the performance penalty of one more function call. So getting into that discussion is a whole other beast.
Yeah, not only that, but some of those wrappers are actually quite horribly coded (you can see the source code of all of them in the libs). I also dislike the fact that in GUI there is no way to see whether a function is actually a native or a wrapper function; you always have to check for the libNtve prefix in galaxy.
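To illustrate the wrapper pattern being described: this is a sketch only, not real library source. The wrapper name here is made up, and the `UnitSetPosition` signature is assumed from memory of the natives list; the point is just that libNtve-prefixed functions add a layer on top of a native call.

```
// Illustrative sketch -- hypothetical wrapper, not actual libNtve source.
// GUI actions with the libNtve_gf_ prefix typically just add a little
// convenience logic around a native:
void libNtve_gf_HypotheticalMoveUnit (unit u, point p) {
    // extra logic would live here, then the native does the real work
    UnitSetPosition(u, p, false);
}

// Calling the native directly skips one function call:
// UnitSetPosition(myUnit, myPoint, false);
```

Every use of the wrapper costs one extra function call over using the native directly, which is the performance penalty mentioned earlier in the thread.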
I tried googling it.. couldn't really find anything discussing if galaxy is compiled or interpreted. I finally buckled down and got an mpq editor and grabbed a map and copy/pasted a trigger a bunch of times and compared the before and after in the mpq. Not as many files as I expected. And true enough.. the only changed files were the string file holding the names of the triggers and such.. the trigger file that holds the xml that describes the trigger editor data.. and the galaxy file with the code in it.
I'm just flabbergasted.
And the more I think about it... it would be difficult for sc2 to run compiled code for a game, especially given it would need the same code to work on both mac and pc. They'd have to go interpreted... but.. why would they create their own butchered version of a language.
To your concerns as to why they couldn't use LUA.. WoW would've had the same issues... they found a way (and this was many years ago that WoW started) to restrict lua's native lib to a smaller subset of functions.. and they also added a couple of their own in addition to the robust wow api. You couldn't write an addon that could write to files or connect to anything external because all those functions were removed. They limited it so that the addon couldn't affect anything outside of the game's UI. It worked great and has obviously stood the test of time.
Since the Trigger Editor simply writes galaxy behind-the-scenes.. I don't see any reason it couldn't simply write lua. It's just a different syntax.
The only thing I can think of is that perhaps they needed a typed language for performance reasons.. there are a crap ton of variable types in galaxy. Maybe they needed something that could be "parsed" and have a reasonable certainty that it wouldn't explode mid-game. I dunno. I'm just fishing. WoW used lua just fine amidst all the inventory and unit frame and other things it had which could've been typed but weren't. And it handled errors without blowing up.. SC2 maps still get errors with all of their string functions and what-not. I have to imagine it was a performance concern, but given SC2's performance hiccups and relatively long load times on even barren maps, it's hard to argue that using LUA would've been any worse.
It's also possible that they went the JVM route and have a just-in-time compiler. That could explain why it takes a map so long to load.. if it's during that stage that they convert the galaxy script into compiled code for the current platform. That is a possibility.. and it makes code optimization even more important if it affects load time.
On a side note.. the xml trigger file is HUGE. In my very tiny demo map, the galaxy script is 5.49 KB. The trigger file is 41.01 KB (7.47 times bigger than galaxy). When I made 12 copies of one of my triggers (of moderate size, 25 lines), it jumped to 32.98 KB galaxy and 420.41 KB triggers (12.75 times bigger than galaxy). In addition the string file jumped from 512 bytes to 4.52 KB. When the galaxy file grew 6 times its size, the trigger file grew over 10 times its size and the strings grew 9 times their size. Even the map file itself (which is compressed) nearly doubled in size from 43 KB to 83 KB.
For further comparison, I took the total 13 copies of that trigger, pasted them into a new custom script "node" in the trigger editor, and saved it. I must've not copied exactly because my resulting galaxy file ended up being 33.19 KB.. a little bigger than the 32.98 of the trigger editor version. But regardless.. my triggers file dropped from over 400 KB back down to just 48.18 (it contains a complete copy of the code wrapped in xml - kind of annoying). And my string file is now 235 bytes.. a little smaller since I removed the original trigger as well and put it into galaxy. Total map size dropped back down to 44 KB.
To take things even further.. I took that custom script node and moved its contents into an external .galaxy file, which I then imported into the map. My trigger file dropped to 9.39 KB since it no longer has the copy of all the code. My string dropped a little to 180 bytes without the node. The combined size of my new imported file and the map script file are about the same as the map script file used to be. My .SC2Map file overall dropped another 3 KB.
I already write most of my stuff in galaxy.. anything complex or using math, at least.. I think I may switch to doing it all in there. The savings on an imported file vs a custom script node aren't huge, but could easily be worthwhile on larger maps. The savings moving from trigger to galaxy seems like just a no-brainer for me after seeing these numbers.
Now I just need a less painful way to write the code in my editor and transfer it into the map. :)
Here is a map init question I have; with smart people in the area... Does picking units/players during map init slow the loading down? In wc3, this was a thing. A big thing. You should never pick units or players during map init, due to the way things were loaded into the game. It would attempt to pick a player that didn't exist yet, and freeze up until said player got far enough into the loading process that it recognized them. It could also cause server splits. Not to steal the topic or anything; but do either of you know if this has any effect in sc2?
I'm not sure. I would hope they'd fix such a thing.
But that said.. since loading screens are such an awful experience (they don't even support chat).. I would suggest to anyone that they take the same route as Aeon of Storms and a couple other maps I've seen recently.. do as little as possible in the map init.. then have a secondary loading phase in-game. This at least gets people off that frozen screen and into an interactive experience as quickly as possible.
Yep, the triggers.xml file can get very, very big. In fact, SCU's triggers.xml has a size of almost 80MB. :O
However, when uploading a map and locking it, the triggers.xml file will get completely deleted because it's not needed by the game. A missing triggers.xml makes it pretty much impossible to open the map with the normal editor, so you can only edit the galaxy script.
(Just to clarify that the triggers.xml does not increase publish size.)
Hello everyone,
I want to split my initialization trigger in several blocks to clear my code up a bit. I see blizzard maps using both triggers and trigger run for splitting this (StarCraft Master, Aiur Chef, Left 2 Die) and action functions (StarJeweled). I thought action functions would be the best method but now I am not so sure anymore. Would appreciate it if someone could clear this up a bit.
@Truun: Go
I'm not sure I understand what you are asking for, but if you are trying to do something once something else happens, and you want to make it into multiple triggers, you could do this: Assuming you have all of the triggers made, go to the left side and right click the triggers you want to happen after something else happens. Then go down to where it says something like "Initially On" and click that so that it is unchecked. This will allow the trigger not to run until you tell it to. Then you just make if statements in your main trigger to find when something should happen and use the run trigger action to run the trigger.
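In raw galaxy terms, the pattern described above looks roughly like the sketch below. This is a hedged approximation of what the GUI generates, with made-up `gt_` names; a trigger with events but "Initially On" unchecked would additionally get a `TriggerEnable(gt_X, false)` call after creation.

```
trigger gt_Main;
trigger gt_Phase2;

bool gt_Phase2_Func (bool testConds, bool runActions) {
    // second-phase actions go here
    return true;
}

bool gt_Main_Func (bool testConds, bool runActions) {
    // ...detect via if statements that the moment has come, then:
    TriggerExecute(gt_Phase2, true, true);  // the "Run Trigger" action
    return true;
}

void InitTriggers () {
    gt_Phase2 = TriggerCreate("gt_Phase2_Func");  // no event registered, so it waits to be run
    gt_Main = TriggerCreate("gt_Main_Func");
    TriggerAddEventMapInit(gt_Main);
}
```

The second trigger only ever runs when the main trigger explicitly executes it.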
@fishy77: Go
That's not really what I meant, but thanks for the information anyways. :)
I'll try to clarify with an example: I have one trigger that is run when the game initializes. That trigger does things like set up the AI, light up monolith bridges, set up UI, set up revealers, etc. Right now, all those setups are in one trigger but I want to have a separate "block" for each. I want to know whether it's better to separate it into action functions or triggers.
You can have multiple "map init" triggers. It really doesn't hurt anything. I personally use Action Definitions; because it makes me feel fancy. My init trigger just says to run a dozen action definitions. The biggest problem with this (really the only problem) is that it can run multiple action definitions at once if you have (create new thread) enabled on them; which many people do. It will run the actions side by side. The problem being if you use variables set from one of them in another. It may take a little bit of organizing.
Anywho; with that pre-text: create action definitions and use them in the same way you would create more map init triggers. paste a block of code into one, then in the map init trigger, run the action (whatever you named the action definition will be in the action list).
I am not sure there is a notable difference between the two; I have tested it a bit and there is nothing amazing showing up in the debugger; in terms of run time.
Skype: [email protected] Current Project: Custom Hero Arena! US: battlenet:://starcraft/map/1/263274 EU: battlenet:://starcraft/map/2/186418
@Truun: Go
I would only create actions for things that need to be run multiple times and have different parameter sets (or need to return a value but that's not important in this context).
For example, I might create an action that respawns a hero.. complete with fancy graphics and sounds and camera movement. Heroes die a lot.. this would be run multiple times. And the parameter of the action would be which hero needs to be respawned... or possibly the player whose hero needs respawning, depending on how I set it up.
I think triggers are better suited to the task here. Triggers may or may not run multiple times, but the defining difference is they have no parameters (and they take events but that's not important in this context either). I would create an Init folder and put an Init trigger as the first item then a trigger for each block - one for the dialog setup, one for bridges, one for AI, etc.
Your Init trigger would have the Event: Map Initialization. And all it would do is call Run Trigger on the rest of the triggers.. in the appropriate order that they should be called in.
Your other triggers in the Init folder would have no event and rely on the Init trigger to run them.
If you have a trigger that doesn't depend on anything from any other trigger.. ie, it doesn't matter if the ai is setup before or after the bridges.. then you can say "Don't Wait" when you use the Run Trigger action.
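Sketched in galaxy, the Init-folder layout described above comes out something like this. It's an approximation of GUI-generated code with hypothetical `gt_` names, not a literal dump:

```
trigger gt_Init;
trigger gt_InitUI;
trigger gt_InitAI;
trigger gt_InitBridges;

bool gt_InitUI_Func (bool testConds, bool runActions) {
    // dialog / UI setup actions go here
    return true;
}

bool gt_InitAI_Func (bool testConds, bool runActions) {
    // AI setup actions go here
    return true;
}

bool gt_InitBridges_Func (bool testConds, bool runActions) {
    // bridge lighting actions go here
    return true;
}

bool gt_Init_Func (bool testConds, bool runActions) {
    // "Run Trigger" on each block, in dependency order
    TriggerExecute(gt_InitUI, true, true);
    TriggerExecute(gt_InitAI, true, true);
    TriggerExecute(gt_InitBridges, true, true);
    return true;
}

void InitTriggers () {
    gt_InitUI = TriggerCreate("gt_InitUI_Func");
    gt_InitAI = TriggerCreate("gt_InitAI_Func");
    gt_InitBridges = TriggerCreate("gt_InitBridges_Func");
    gt_Init = TriggerCreate("gt_Init_Func");
    TriggerAddEventMapInit(gt_Init);  // only the dispatcher has an event
}
```

The sub-triggers have no events of their own; only the dispatcher is wired to Map Initialization.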
Actions: they are faster and produce less code. There is no reason to use a trigger without events, except when what gets run needs to be changed at runtime.
My advice: Define "event handlers", meaning ONE trigger per general event, such as unit dies, and from there execute actions (Passing triggering unit as parameter). Fastest and cleanest way of doing things. Also makes it easy to change order of execution and/or to see what exactly happens from that event on.
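A minimal sketch of that event-handler pattern in galaxy, with hypothetical `gf_` action names (the `TriggerAddEventUnitDied` and `EventUnit` signatures are assumed from memory of the natives list):

```
trigger gt_UnitDies;

// Hypothetical action definitions: plain functions taking the unit as a parameter
void gf_AwardBounty (unit u) {
    // bounty logic here
}

void gf_PlayDeathEffects (unit u) {
    // crowd cheer, victory banner, etc.
}

bool gt_UnitDies_Func (bool testConds, bool runActions) {
    unit u;
    u = EventUnit();          // read the "Triggering Unit" event response once
    gf_AwardBounty(u);        // then pass it by parameter everywhere else
    gf_PlayDeathEffects(u);
    return true;
}

void InitEventHandlers () {
    gt_UnitDies = TriggerCreate("gt_UnitDies_Func");
    TriggerAddEventUnitDied(gt_UnitDies, null);  // ONE trigger per general event
}
```

All the unit-death reactions live in one place, and the event response is read a single time.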
Mille is correct about event handlers. You then use logic in the event handler to fire off functions or other triggers. Yes, new triggers and functions are slower, but it slows down the game so little that I wouldn't even worry about it; the organization of code is more important.
Actions are slimmer, I suppose, since an action is simply a function and a trigger is, I believe, 2 functions and a trigger object. I'm not sure the difference is quantifiable though.. haven't come across any benchmarking for sc2 stuff.
I guess my only advantage to using a trigger over an action here is that it doesn't add even more items to the already very lengthy action list.
If you're really concerned with performance, you should keep it all in a single map trigger (or better yet, write directly in galaxy using native functions).
I would prefer triggers in this case but I can understand the argument for actions trying to be a compromise between performance and organization. I'll just say, until I can see benchmarking on it, the difference is likely a fraction of a millisecond on something that's only called once so how much does it really matter. :)
Do what looks best and makes sense to you.
@Mille25: Go
That seems like a really nice way to handle events. Makes sense not to have two triggers track the same event. I'll start merging events and use that from now on.
For the initialization, I do like the idea that the "initialization blocks" aren't added to the actions list. Is there some way to hide those functions from the list after I've used them?
Check the "Hidden" flag in the action definition.
I don't think it's really running side by side. It's very deterministic in which order the code is executed:
- Triggers above other triggers in the code run before those other triggers when their events fire at the same time.
- Already running threads/triggers are executed before newly fired triggers.
- Threads started from triggers are executed last.
- Threads executing a Wait(0) will continue to execute their code after all existing threads.
So, actually, threads are executed in order of a list.
In general this is correct, however there are some scenarios where you can really benefit from using action calls, such as "unit gets damage" or other very frequently called events, especially if the actual code is quite fast, too.
Actions are faster for multiple reasons:
- TriggerExecute() is way slower than just calling a function
- They return void.
- They can check conditions faster than triggers, because the unnecessary negation that generated trigger conditions use is missing.
- Event properties just need to be read out once, stored in a variable and then passed by parameter, such as "Triggering Unit". All of those event responses are actually functions which are slower to call than reading variables/parameters, so the more often you need to access event properties the more actions will pull ahead.
Also, as mentioned, the event handler approach produces way, way less code, because
- Events don't get added to triggers multiple times
- The action calls themselves contain way less galaxy code than triggers, because the trigger initialisation and registration is missing
- Bloated generated code segments such as condition validation or return true/false are not necessary.
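To make the code-size difference concrete, here is a rough side-by-side of what the GUI emits for a trigger versus an action definition. This is a simplified sketch with made-up names; real generated code carries more boilerplate than shown:

```
// One GUI trigger becomes a global, a callback, and registration code:
trigger gt_OnSomething;

bool gt_OnSomething_Func (bool testConds, bool runActions) {
    if (!runActions) {
        return true;   // generated condition/action scaffolding
    }
    // actions...
    return true;
}

void gt_OnSomething_Init () {
    gt_OnSomething = TriggerCreate("gt_OnSomething_Func");
    // event registration calls would go here
}

// An action definition, by contrast, compiles to a single plain function:
void gf_DoSomething (unit u) {
    // actions...
}
```

Calling `gf_DoSomething(u)` is an ordinary function call, with none of the trigger object, registration, or condition scaffolding.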
@Ahli634: Go
I always wondered how the threading worked. I just ran some tests and you're right: it looks like it doesn't actually have real threading; it just moves things around the stack, much like JavaScript does. Makes a lot more sense this way now.
Ordering seems a little weird. I created three triggers: A, B, and C. All iterate 1 through 3 printing out their letter and then waiting on each iteration.
I ran a few tests to see the order of execution.
A and B execute on map init, C has no event. A calls C with Don't Wait as its first action.
I expected to get CAB CAB CAB .. because A and B would go on the stack from the event, A would execute first, which would call C immediately. C would print then wait (move to end of stack). A would print and wait. Then B would print and wait. C was first moved on to stack so keep iterating in that order...
I did get CAB, but then I got BCA BCA.
If I remove C, I get AB BA BA.
If I do C on map init, I get ABC CBA CBA
If I do C as first action of A with Wait, I get CB BC BC AAA
If I do A and C on init and A calls B with Don't Wait, I get BAC CBA CBA
If I do B and C on init, and C calls A with Don't Wait, I get BAC ACB ACB .. which is the first time that the last thing in the first set wasn't called first in the second set.
Been staring at this a while now.. what's happening is that things are going onto the stack in reverse order and then maintaining reverse order. The first set is always different because that's executing in operational order.. after that, it executes in function order.
So if you look at the last one, B gets called from map init and prints B, then C gets called from map init and executes A which prints A then C finishes executing to print C .. BAC. BUT the function order was BCA.. which is ACB reversed. If you follow that logic for all of them, it works out. It's almost like it's executing functions from the bottom up, but executing events from top down. Or in other words, event triggers are executing from the top of the stack while everything else executes from the bottom.
TLDR: Don't rely on any set order of execution. If B shouldn't execute until after A, then make A execute B when it's done. Don't rely on events or waits or which function is above the other in the list.
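For reference, each test trigger looked roughly like the sketch below (hedged: the names are made up, and the print/wait calls assume the usual `UIDisplayMessage`/`Wait` native signatures). B and C were identical apart from the letter, and in some runs A called `TriggerExecute(gt_C, true, true)` as its first action:

```
trigger gt_A;

bool gt_A_Func (bool testConds, bool runActions) {
    int i;
    i = 1;
    while (i <= 3) {
        // print this trigger's letter
        UIDisplayMessage(PlayerGroupAll(), c_messageAreaSubtitle, StringToText("A"));
        Wait(1.0, c_timeGame);  // yield; the order threads resume in is the question
        i = i + 1;
    }
    return true;
}

void InitTest () {
    gt_A = TriggerCreate("gt_A_Func");
    TriggerAddEventMapInit(gt_A);
}
```

The resulting letter sequences (CAB BCA BCA, etc.) are what the ordering observations above were read from.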
@Mille25: Go You are starting to sway me towards using actions for this scenario.. primarily because I agree that if a trigger doesn't have an event then there's no reason to incur the overhead (whatever its size) of creating a trigger.. you can simply have your one map init trigger call a few functions. However if I may play devil's advocate to your points... and also perhaps lend a little more information to the topic because I think this has been a really great discussion and I've already learned quite a bit from it...
The speed difference here is likely negligible. I don't know how TriggerExecute() works.. it's probably doing a dictionary lookup and calling the associated function, but with proper optimizing, we could be talking nanoseconds. And the difference on a bit-wise operator is even less than that.
I know this isn't the meaning of your argument, but I want to clarify for anyone else that might read it this way, just because you have less source code doesn't necessarily mean you'll have less or quicker compiled code (and as far as I know, Starcraft 2 does compile the galaxy code). For instance, compare: a += 1; vs a = a + 1; First one is shorter, but these are going to compile to exactly the same thing (or they certainly should if the compiler is any good).
To the meaning of your argument... if we're talking we already have a single map init trigger, that's a given, and the difference between it then executing 5 triggers or calling 5 actions.. we're talking about the triggers adding 5 functions and 5 function calls one time only on map init to set themselves up.. and then on execution, it's 5 more if statements to check if it should run. Then there's 5 trigger objects in memory and TriggerExecute runs 5 times. So given all that.. yeah, that sounds like a good bit more... which is why I said that for this scenario I'm seeing the advantage of using actions over triggers.. even if they do get added to the action list (I saw the comment about hiding them afterwards, but that just seems like a pain if you do ever need to move them to unhide and re-create and re-hide - I would just keep them visible and live with it).
I thought this was an interesting argument... and one that's very subjective. So I'd like to lay out a bunch of scenarios and discuss the approaches:
If you're using an event property only once, it's actually quicker to just make the single function call than it would be to store the function result to a variable and pass that variable to a function. As you said, the more you use it, the more performance favors the action.. but it goes the other way too - the less you use it, the more performance favors the initial trigger. I could strengthen your argument further as well by removing the middle step, don't store it to a variable, just pass it.
For instance, choose one (pseudo-code):
UIDisplayMessage(UnitName(TriggeringUnit()));
vs
unit myUnit; myUnit = TriggeringUnit(); PrintUnitName(myUnit);
vs
PrintUnitName(TriggeringUnit());
Last example is least code, first is going to be the quickest if that's really all you need to do.
HOWEVER, if you do need to do more with the triggering unit, you can still store it to a local variable in the trigger and then use that rather than calling the function again and again.. this would be a very good practice. One should never call the same function twice (whether it be a blizz function or one you make yourself) unless you're reasonably certain the outcome will be different.
In addition, Actions in the Trigger Editor do not allow you to modify params. So if you had a trigger like On Unit Takes Damage and passed that TriggerUnit() to your Action as a param, and then wanted to modify the unit param, you can't. Think of a chain lightning ability done in a trigger where you pass the first target and then the action does the bouncing. Your target will keep changing on each bounce.. but you can't change your target param so you end up having to create a local variable for your target and setting its initial value to your param and then modifying that from there (I actually did this a couple weeks ago). This creates more source code and is (negligibly) worse for performance than if you just had a local variable to work with.
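The workaround described, copying the read-only param into a local that the bounce loop is allowed to reassign, looks roughly like this (a sketch; the action name, bounce count, and targeting logic are all hypothetical):

```
// Hypothetical chain-lightning action; "target" is the GUI param we can't modify
void gf_ChainLightning (unit target) {
    unit cur;
    int bounce;
    cur = target;            // local copy we ARE allowed to reassign
    bounce = 0;
    while (bounce < 3) {
        // ...damage cur, then pick the next nearby enemy into cur...
        bounce = bounce + 1;
    }
}
```

In hand-written galaxy the extra copy wouldn't be needed at all, since a parameter is just a local variable there.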
Also consider the case where your action is used multiple times but it's only ever called by a single trigger. Say on unit death you want to award bounty. You wire up the events to your trigger, you grab the dead unit and the killing player and send those over to an action to determine the amount of bounty and award it. Now for that, you've created 3 functions (2 for the trigger + 1 for the action) when you really only needed the single trigger (2 functions)... all other things being equal.
I do like the idea of having a localized place for event handling. I'm not arguing against that. If we take the last example but let's say in addition to awarding bounty on unit death, maybe it's a gladiator type of game so when a unit dies the crowd cheers and maybe a banner enters the screen proclaiming the victor and loser and.. I dunno.. the killing unit does a little dance. So now we have 4 "things" that take place. For organization purposes, I'd make those 4 separate actions all calling from a single trigger.. I really like that idea. The banner and the bounty actions might both need to know the defeated unit so you could put that in a variable and then pass the var rather than call the DyingUnit() function twice. The banner also needs to know the victor so maybe you pass KillingUnit() directly.. maybe you still assign it to a variable for readability and consistency.
If you are worried about performance though, it would still be best to keep the code for them in the trigger since none of those things will happen outside of a unit death event.
So what I'm saying is that whether or not a trigger or action is "best" depends on the circumstances in which it's being used.
Also, I'm trying to point out that the war between performance vs organization is exactly that.. almost anytime you do something to better organize your code, you're going to hurt performance. Function calls have overhead associated with them (if you've ever seen assembly code, functions don't actually take params the way we think of it.. you have to first push all the params onto a stack and then the function takes them down off the stack in a consistent order so the more params you have the more expensive your function call is both in terms of memory and performance).
Technology is reaching a point where I think performance is becoming less and less of a concern. Processors are getting faster. GPUs are taking on some of the work. That's why I say that until I see benchmarks, most of these concerns are negligible. We're not building nuclear reactors here. :) Everything is a case-by-case.
I mentioned that I like the idea of a single place for event handling, but that depends on the situation as well.
Let's imagine a game where some abilities are created using triggers rather than the data editor. So.. we have a game where bounty is awarded on unit death.. but somebody also has a passive ability that each time he kills a unit a small explosion occurs at the location of the corpse to damage nearby enemies.. Corpse Explosion. Now.. do I have a single On Unit Death trigger.. that calls the bounty action and the corpse explosion action (letting the corpse explosion action check the killing unit to see if he has the passive or not) .. or does my On Unit Death trigger call the bounty action and then check to see if it should call the corpse explosion action (segmenting the logic). Or do I have two triggers... one for the bounty action (and any other simple actions that always occur on unit death - ie things generic to the game)... and a separate trigger grouped with anything else (variables, actions, etc) pertinent to my Corpse Explosion ability.
Personally, I think I like the last scenario here.. I think I like the idea of having every last bit related to my corpse explosion in a single folder that I can then copy/paste into another map if need be. And then if I want to change the eventing for corpse explosion, I look at my corpse explosion ability folder rather than looking through my eventing to see what calls the corpse explosion.
Whew, that was a lot. Brevity is not my strong suit. I think I hurt my own head writing all of that. :D
I pretty much agree with 100% of the stuff you said, just a couple of comments. :)
Absolutely. As I've said before, it only really makes a difference for very frequently executed code, but I actually had some quite amazing results with stuff like unit takes damage.
In this case I disagree that compile speed is not affected. I can't prove it, but I'm almost certain that compiling a trigger takes longer than compiling an action (when saying "compile" I'm actually talking about converting them from GUI to Galaxy in this case), simply because a lot more code needs to be generated. The actual galaxy compile time should also be faster, because there are fewer functions overall. It's not really comparable to i+=1 in this case.
That's a point I never thought about, but I don't really think it's an argument for either side. Triggers also get added to the trigger value list. Even though the action list is obviously used more often, I never suffered from it since I ALWAYS just use the search bar. Load times also didn't seem to increase significantly.
That's true, but the reality of it is that in 95-99% of all cases you need event responses multiple times, which I would say is why that argument is sort of invalid, even though theoretically correct. Obviously the event handler concept makes the most sense for big projects with a lot of code.
This is another reason why calling actions is faster. Let's say you go for the good old trigger approach and have 5 triggers with the same event. Even if you write the event properties into local variables in every trigger, they would still get read 5 times, and combining all triggers into one would simply be an absolute nightmare for organization and in reality not really possible (even though it would in theory offer the best performance).
That's not entirely correct. If the performance benefit is really needed, you can use custom script to modify parameter values. I'm not sure why GUI doesn't support it, but it's certainly possible.
I would really be interested in Blizzard's reasoning behind not implementing this into GUI. :)
The event handler approach doesn't necessarily forbid using code directly in the event handlers, but I mostly just do it for one-liners which are not called from anywhere else, since otherwise the organization is completely messed up.
I agree 100%, but from my experience, in reality calling actions always pays off.
Again absolutely correct, but as said above, pasting all code into the same trigger would be insane and is pretty unrealistic. :D
Right... Still, galaxy is about 1000x slower with arithmetical operations than C++, and even worse with string handling, and we are still programming games where even today C++ dominates for performance reasons. So while it's absolutely true that performance doesn't matter 95% of the time, I would still try to use all possible advantages if they don't require too much work or mess up organisation or readability, and the event handlers are a perfect example for that, IMO.
That said, even SCU doesn't use it perfectly and just for very performance-intensive pieces of code; a lot of the event handlers are very old and still call triggers because I'm too lazy to convert all of them into actions and the benefit isn't worth it.
So no, I'm not saying everyone should start converting their code immediately; it's just something to consider for future projects or additions :D
I agree with; and have tested much of the above. Especially unit takes damage events; which for a short while; I thought would be the end of my project. For the specific case of map init though; at least with the dozen map init triggers I condensed into 1; the run time is not notable. Like, less than .1 second in the debugger window; and my stopwatch couldn't tell the difference.
there are, uhhh... what are they called in the editor.. "action blocks" I call them. It just subgroups a list of actions so you can minimize that list while dealing with other things. Now, if there was the option to have them all minimized on start, that would be stellar. Sadly, there isn't, so you need to manually click to minimize each one before you can then start messing with the specific things you want to mess with. I am not sure if they have any effect what-so-ever on performance; as it is an editor-only dealie.
Here is a map init question I have; with smart people in the area... Does picking units/players during map init slow the loading down? In wc3, this was a thing. A big thing. You should never pick units or players during map init; due to the way things were loaded into the game. It would attempt to pick a player that didnt exist yet; and freeze up until said player got far enough into the loading process that it recognized them. It could also cause server splits. Not to steal the topic or anything; but do either of you know if this has any affect in sc2?
No, they do not get converted into Galaxy at all; it's a purely visual GUI thing.
Everything you do in script on map init slows the map loading time down by a very small fraction of a second, but I wouldn't know why picking units/players would be worse than anything else.
@Mille25: Go
Great points :)
I wanted to clarify one thing.. compiled code is machine-level code. I don't know for certain if SC2 compiles to machine-level code, but I'm pretty sure it compiles the galaxy script into something closer to machine-level code so I'll call it machine-level for the sake of simplicity.. it creates some kind of low-level instruction that it can execute much more quickly than it could parse the galaxy script in real-time. In all honesty it's not even fair to call it script. Ctrl+F11 is the "View Script" command but scripts are parsed and executed at run-time which gives them the distinct advantage of being able to do things like having functions as variables and dynamically executing code on-the-fly. Due to how limited galaxy is and due to its similarity to C and its use of references and strongly-typed variables, I'm pretty certain it's not script. Blizzard used Lua script in WoW for the addon interface, and it was extremely powerful and flexible, so I can only imagine the reason they didn't use that here was for performance.. and to get better performance, that means using a language that's compiled.. not a scripting language. But I digress. :)
When you're using the trigger editor, everything you do is converted to galaxy script (or code.. or whatever). You can see it by hitting Ctrl+F11. I believe this is the step you're thinking of as being faster using actions vs triggers? This step doesn't affect game performance at all and is virtually immediate within the editor.
When you save the map, hit Ctrl+F12, or seemingly at random other moments as well, the game will try to compile your code. It's a bigger deal when writing galaxy script manually than when doing trigger stuff... and the editor actually won't save your map if there are any compilation errors. This is the step though that converts all of the galaxy script.. the human-readable code.. into machine-level code. This step could take a little while on really, really large projects, and if you get down into it, the less galaxy script you have the faster it would be, but again it's negligible. I've compiled large java projects that take 10-15 minutes to finish. I can't imagine anything in SC2 taking more than a few seconds. I'd be curious to know how long SCU takes. :) There's no indication of when it finishes though.. if there's an error it'll ding and show the error, but if it all works, I think the window just flashes or something.
The comparison of i = i + 1 and i += 1 was to demonstrate that regardless of how you type it, that'll compile into the same machine code. It'll run the same. We can't see that code.. I assume it's in the mpq somewhere but it's binary and we wouldn't understand it.
The parameters, yeah, I have no idea why they don't let you edit them. If you look at the galaxy script, they're simply functions with regular parameters. If you wrote that same function manually, you could edit them. It's not a limiting feature of the language, it's of the GUI. But to your point, while you could do a custom action to modify them, I feel that defeats the purpose of using the GUI. If you're doing work with the Trigger Editor you're able to rename and move things around and it'll update all references and write function declarations for you. Once you start doing custom code, you break that. If you really want to be as performant as possible though, you would write everything in galaxy script manually. Many of the actions in the Trigger Editor are purely wrappers around native functions that add a little more logic (like getting "default" facing) or simply change the order of the parameters. Every time you use these actions, you're suffering the performance penalty of one more function call. So getting into that discussion is a whole other beast. :)
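As an illustration of the wrapper point: the GUI's "create units with default facing" style action goes through a libNtve wrapper that ultimately calls the UnitCreate native. The signatures below are from memory and may not exactly match the shipped TriggerLibs, so double-check against your own generated script:

```galaxy
// Via the GUI wrapper (one extra function call per use; signature approximate):
unitgroup g1 = libNtve_gf_CreateUnitsWithDefaultFacing(
    1, "Marine", 0, 1, RegionGetCenter(RegionEntireMap()));

// Calling the native directly, supplying the facing angle yourself:
unitgroup g2 = UnitCreate(
    1, "Marine", 0, 1, RegionGetCenter(RegionEntireMap()), 270.0);
```

In hot loops (per-frame periodic triggers, damage events) those extra wrapper calls are where going native can actually be measurable; for one-off setup code it won't matter.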
As far as I know, Galaxy is an interpreted language and therefore doesn't get compiled any further. I'm not aware of any additional files getting created on code compilation within the map's MPQ, and the performance of the language is so bad that it almost HAS to be interpreted. :)
I think the main reason the language is so simple is that it would have been too much work to write a more complex interpreter/compiler. Blizzard could not use a "real" language such as Lua because it would probably have been too powerful (allowing users to actually damage computers, create files, etc.) and most features would not work well with a GUI interface, which seems to be their desired way of doing stuff.
The editor was designed to be used by beginners and people with no programming background, which is why they focused on a very simple scripting language that can do basic tasks.
I'm not 100% certain on this, but I think the compilation from GUI to Galaxy only happens when you save the project or hit View Script. When I said compilation time is longer when using triggers instead of actions, I was talking about that particular step.
The actual "compilation" of the .galaxy file is probably just a syntax check.
As I've said above, I don't think a step like that is actually happening. I think a big chunk of time is spent converting GUI to Galaxy. It doesn't really matter though, since triggers will always be slower anyway, because there is more code to be compiled and checked.
Good point. I was just saying that it's theoretically possible if it's really needed. It's nothing one should do frequently. :D
Yeah, not only that, but some of those wrappers are actually quite horribly coded (you can see the source code of all of them in the libs). I also dislike that in GUI there is no way to see whether a function is actually a native or a wrapper; you always have to check for the libNtve prefix in Galaxy.
@Mille25: Go
Man am I ever disillusioned...
I tried googling it.. couldn't really find anything discussing if galaxy is compiled or interpreted. I finally buckled down and got an mpq editor and grabbed a map and copy/pasted a trigger a bunch of times and compared the before and after in the mpq. Not as many files as I expected. And true enough.. the only changed files were the string file holding the names of the triggers and such.. the trigger file that holds the xml that describes the trigger editor data.. and the galaxy file with the code in it.
I'm just flabbergasted.
And the more I think about it... it would be difficult for sc2 to run compiled code for a game, especially given it would need the same code to work on both mac and pc. They'd have to go interpreted... but then why would they create their own butchered version of a language?
To your concerns as to why they couldn't use LUA.. WoW would've had the same issues... they found a way (and this was many years ago that WoW started) to restrict lua's native lib to a smaller subset of functions.. and they also added a couple of their own in addition to the robust wow api. You couldn't write an addon that could write to files or connect to anything external because all those functions were removed. They limited so that the addon couldn't affect anything outside of the game's UI. It worked great and has obviously stood the test of time.
Since the Trigger Editor simply writes galaxy behind-the-scenes.. I don't see any reason it couldn't simply write lua. It's just a different syntax.
The only thing I can think of is that perhaps they needed a typed language for performance reasons.. there are a crap ton of variable types in galaxy. Maybe they needed something that could be "parsed" and have a reasonable certainty that it wouldn't explode mid-game. I dunno. I'm just fishing. WoW used lua just fine amidst all the inventory and unit frame and other things it had which could've been typed but weren't. And it handled errors without blowing up.. SC2 maps still get errors with all of their string functions and what-not. I have to imagine it was a performance concern, but given SC2's performance hiccups and relatively long load times on even barren maps, it's hard to argue that using Lua would've been any worse.
It's also possible that they went the jvm route and they have a just-in-time compiler. That could explain why it takes a map so long to load.. if it's during that stage that they convert the galaxy script into compiled code for the current platform. That is a possibility.. and it makes code optimization even more important if it affects load time.
On a side note.. the xml trigger file is HUGE. In my very tiny demo map, the galaxy script is 5.49 KB. The trigger file is 41.01 KB (7.47 times bigger than galaxy). When I made 12 copies of one of my triggers (of moderate size, 25 lines), it jumped to 32.98 KB galaxy and 420.41 KB triggers (12.75 times bigger than galaxy). In addition the string file jumped from 512 bytes to 4.52 KB. When the galaxy file grew 6 times its size, the trigger file grew over 10 times its size and the strings grew 9 times their size. Even the map file itself (which is compressed) nearly doubled in size from 43 KB to 83 KB.
For further comparison, I took the total 13 copies of that trigger, pasted them into a new custom script "node" in the trigger editor, and saved it. I must've not copied exactly because my resulting galaxy file ended up being 33.19 KB.. a little bigger than the 32.98 of the trigger editor version. But regardless.. my triggers file dropped from over 400 KB back down to just 48.18 (it contains a complete copy of the code wrapped in xml - kind of annoying). And my string file is now 235 bytes.. a little smaller since I removed the original trigger as well and put it into galaxy. Total map size dropped back down to 44 KB.
To take things even further.. I took that custom script node and moved its contents into an external .galaxy file, which I then imported into the map. My trigger file dropped to 9.39 KB since it no longer has the copy of all the code. My string file dropped a little to 180 bytes without the node. The combined size of my new imported file and the map script file is about the same as the map script file used to be. My .SC2Map file overall dropped another 3 KB.
I already write most of my stuff in galaxy.. anything complex or using math, at least.. I think I may switch to doing it all in there. The savings on an imported file vs a custom script node aren't huge, but could easily be worthwhile on larger maps. The savings moving from trigger to galaxy seems like just a no-brainer for me after seeing these numbers.
Now I just need a less painful way to write the code in my editor and transfer it into the map. :)
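For anyone trying the same thing: the imported-file setup above boils down to a single include line. The path here is hypothetical; it just has to match wherever the Import Module put the file, and note that the .galaxy extension is omitted in the include:

```galaxy
// At the top of a custom script node (or in the map script):
include "Scripts/MyMapCode"

// Anything declared in the imported Scripts/MyMapCode.galaxy file
// (functions, globals, constants) is now visible to the rest of the map script.
```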
I'm not sure. I would hope they'd fix such a thing.
But that said.. since loading screens are such an awful experience (they don't even support chat).. I would suggest to anyone that they take the same route as Aeon of Storms and a couple other maps I've seen recently.. do as little as possible in the map init.. then have a secondary loading phase in-game. This at least gets people off that frozen screen and into an interactive experience as quickly as possible.
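A rough Galaxy sketch of that two-phase idea, with all names hypothetical; the exact point at which a Wait resumes relative to the loading screen is something you'd want to verify in-game:

```galaxy
trigger gt_Init;
trigger gt_SecondPhase;

// Phase 2: heavy setup (AI, revealers, UI). Players are already in-game,
// so they can see the map and chat while this runs.
bool gt_SecondPhase_Func (bool testConds, bool runActions) {
    // ... heavy setup here ...
    return true;
}

// Phase 1: keep map init minimal so the loading screen ends quickly.
bool gt_Init_Func (bool testConds, bool runActions) {
    Wait(0.0, c_timeGame);  // yield until the game clock is actually running
    TriggerExecute(gt_SecondPhase, true, true);
    return true;
}

void InitTriggers () {
    gt_Init = TriggerCreate("gt_Init_Func");
    TriggerAddEventMapInit(gt_Init);
    gt_SecondPhase = TriggerCreate("gt_SecondPhase_Func");
}
```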
Yep, the triggers.xml file can get very, very big; in fact, SCU's triggers.xml has a size of almost 80MB. :O
However, when uploading a map and locking it, the triggers.xml file gets completely deleted, because it's not needed by the game. A missing triggers.xml makes it pretty much impossible to open the map with the normal editor, so you can only edit the galaxy script.
(Just to clarify that the triggers.xml does not increase publish size.)