I'm working with some pretty complicated triggers that exceed the maximum runtime for a trigger (which according to my testing is about 175ms).
I've discovered that the Wait() command sleeps a trigger and that the trigger duration is not saved over context switches. So by inserting Wait( 0.0, c_gameTime ) into a trigger, it can essentially be broken up to fit into intervals that do not exceed the trigger limit.
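The slicing pattern is easy to sketch outside of Galaxy. Here's a minimal Python analogue (Galaxy itself can't run outside the game, so a generator's yield stands in for the Wait( 0.0, c_gameTime ) call; all names here are illustrative, not from the map's code):

```python
def run_in_slices(items, process, ops_per_slice=50):
    """Apply process() to each item, pausing after every slice.

    The yield stands in for Galaxy's Wait(0.0, c_gameTime): it hands
    control back so no single slice exceeds the runtime budget.
    """
    for i, item in enumerate(items, 1):
        process(item)           # placeholder for the per-cell work
        if i % ops_per_slice == 0:
            yield i             # slice boundary; in Galaxy: Wait(0.0, c_gameTime)

# Driving the generator to completion records where the breaks fell:
breaks = list(run_in_slices(range(169), lambda cell: None))
# -> [50, 100, 150]: three pauses while processing 169 grid cells
```

The slice size is the tuning knob: it has to be small enough that one slice stays under the limit on the slowest machine you care about.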
This raises an interesting question though - when playing a game online, is the trigger execution time determined by the slowest person's computer? The machine I test on is quite fast and if my map is played by someone with a much slower computer, the way I am splitting my trigger may not work.
In case you are wondering, I am working on a map with a hexagonal game grid where the movement costs for units can vary depending on the grid cell, so a shortest path problem needs to be solved every time a unit is selected for movement. I haven't implemented a priority queue yet so the current implementation I'm working off of is O(n^2) (yuck). To make matters worse, for the maximum range of a unit in my game (7) this turns into n = 169 hexagonal grid cells. Right now the slowest part of the procedure is building the n by n array of weights between the nodes.
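For reference, the priority-queue version is a small change. Here is a hedged Python sketch (function names are mine, not the map's code); note that it computes edge costs lazily through a callback, which also avoids building the full n-by-n weight array, the part reported as slowest:

```python
import heapq

def hex_dijkstra(neighbors, step_cost, start, max_cost):
    """Dijkstra with a binary heap: O(E log V) instead of the
    O(n^2) repeated minimum scan.

    neighbors(cell) -> iterable of adjacent cells
    step_cost(a, b) -> movement cost of entering b from a
    max_cost        -> movement range; costlier cells are pruned
    """
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if d > dist.get(cell, float("inf")):
            continue  # stale heap entry, already relaxed
        for nxt in neighbors(cell):
            nd = d + step_cost(cell, nxt)
            if nd <= max_cost and nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return dist  # reachable cells -> cheapest movement cost
```

With range 7 and 169 cells this touches each edge once instead of scanning all 169 nodes on every extraction, so far fewer trigger slices are needed.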
Edit: So far I've tested on two computers, a 3.4 GHz quad-core Phenom and a 2.4 GHz dual-core Turion, and the runtimes were exactly the same. I need to find someone with a really slow computer to see whether this trick fails in some cases.
In multiplayer the game should lag so the slowest computer can catch up to the others, rather than canceling triggers prematurely.
I think (though this is only a guess) that it's actually an execution limit, not a time limit: there can be x operations before the thread gets canceled, and that happened to take 175 ms for you.
Anyway, I have a 3.4 GHz single core here, but it doesn't have internet access, so I can't test it on this :<
I wouldn't worry too much about this. I highly doubt Galaxy script will make use of more than one CPU core, and it's a rather large improvement over, say, JASS. While you may want to optimize your runtime (as you already know), I don't see it causing any problems. Worst case, Battle.net is just going to tell the other players that the slow computer is lagging, and the gamestate sync will take care of everything.
@MotiveMe:
The message "player xxx has slowed the game" will probably pop up. Bnet will make sure that all players stay perfectly in sync; otherwise you will get dropped (when your connection is not actually lost, just responding slowly).
So wait, this is actually bnet doing that, not the player?
@tigerija: Go
I do believe that when playing on bnet the triggers are run on the B.net server.
When you test, it's done locally.
@SouLCarveRR: Go
Naw, actually all triggers are run on all clients (players' computers) at the same time.
So even if there's a trigger that only affects player 1, all other players will run this trigger too and remember the changes.
That way they don't have to send info on everything that happens through the net, because every player always calculates what happens.
But when you're playing online and a player lags behind (e.g. because he alt-tabs or his PC is slow), then that player's machine cannot send all responses to the server in time. The server notices this and tells all players to wait for the lagging player. If the lag isn't resolved after 45 (?) seconds, then bye bye.
So the host makes sure that all clients stay in line, but the host won't run the triggers for them.
Did you say you are rebuilding it every time a unit is selected? No need o.o 169 nodes is so small too
@crutex: Go
He's making a pathfinding function. This needs to be recalculated every time because the position of the unit and possible obstacles aren't always in the same place (the 7 range is only a radius around the unit, not the full playable map).
@crutex: Go 169 nodes is small in reality, but it's more actions per trigger than StarCraft wants to deal with. I was a little misled because I was testing with the trigger debugger, which slows everything down a bit as well (especially if you have debug output).
Anyway, I got it working, it looks like this:
@AzothHyjal: Go
That looks really cool, I hope you get it working to your satisfaction :)
Yay, a map that uses Andromeda!
Did you know that there exist point functions that should (at least judging by their names; I haven't used them yet) return things you may need?
I'm not sure about the function name right now, but one of them returns the "pathing cost" (?) between two points. So try whether this is the distance between the two points, or some value that can be used as the distance. It's the same problem as we had in WC3 scripting: after several iterations you simply exceed the operation limit and need to use a wait.
I did some testing on the operation limit in WC3 and found a formula that lets you calculate how often a loop will be executed in its own thread before the game breaks it off. As a rule of thumb, more function calls, variable references, and operations (like +-*/ && ||) drastically decrease the limit. Will post more details later if someone wants to know. :D
@Rushhour: Go I'm very interested in it. It might push more people to write Galaxy code directly rather than just triggers, if only to put more code into their maps more easily.
All right then; the following is valid for WC3, not yet tested whether SC2 uses a similar system!
The number of executions of a loop can be calculated with the following formula: n(x) = 300,000 / (2x + 1), with n being the number of executions and x being what I call "ops". That would mean an empty loop in its own thread is executed 300,000 times before it breaks off. This can't be verified directly, since adding a counting variable would itself influence the result.
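As a quick sanity check of that formula, the arithmetic is trivial to reproduce (Python, just for illustration):

```python
# Rushhour's WC3 estimate: a loop running in its own thread survives
# n(x) = 300,000 / (2x + 1) iterations, where x is the "ops" per iteration.
def max_executions(ops):
    return round(300000 / (2 * ops + 1))

print(max_executions(3))  # -> 42857, the measured value for x = 3
print(max_executions(6))  # -> 23077, the measured value for x = 6
```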
My testing function looked like this:
That's the basic setting. It returns a value of 42857. Seems pretty random, but it follows some logic: looking at the formula, this is the rounded outcome for x = 3. I did various tests and discovered the following: set TestI=TestI+1 increases x (the "ops") by 3, one for referencing the variable TestI, one for using a calculation operator (+), and one for referencing the number 1.
When I try the following:
set TestI=TestI+1
set someInt=someInt+8436
I get 23077 as a result. Again, this is the rounded outcome for x = 6, with the same explanation: two variable or number references and the + in each statement.
When using:
set TestI=TestI+1
set someInt=someInt
set someInt=someInt
set otherInt=otherInt
I get the same result as above, again 6 ops referenced! Each plain assignment apparently counts as a single op.
OK, that's the basics. When you want to know how many times your loop will be executed, you simply count all of these, but there are more things to keep in mind:
A native function call itself adds 0.5. If it has a parameter and you pass some variable or value, add 1 for it. If you do some math like "var + 5" in the parameter, add 3.
If you call one of your own functions, add 1 for the function itself. The parameters count the same as for natives, but: you also need to count the work done inside the function as if it were in the loop.
A completely empty if-then-else would add 0.5 (but nobody ever has that), so add 1.5 for an if-then-else that has something in it. And: only the code actually reached by the loop counts! If the outcome is random, you have to guess, or add the most expensive branch to your x.
If there is a loop inside your loop, count everything done inside the nested loop as before and multiply it by the number of executions of the inner loop.
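Putting those counting rules together, a worked tally might look like this (Python arithmetic; the JASS statements in the comments are hypothetical examples, not code from the thread):

```python
# Hypothetical loop body and its op tally under the rules above:
#   set TestI = TestI + 1    -> 3   (two references plus one operator)
#   set someInt = someInt    -> 1   (plain assignment)
#   call SomeNative(myVar)   -> 1.5 (0.5 for the native, 1 for the parameter)
ops = 3 + 1 + 1.5                       # x = 5.5 per iteration
limit = round(300000 / (2 * ops + 1))   # n(x) = 300,000 / (2x + 1)
print(limit)  # -> 25000 iterations before the thread is cut off
```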
So having a lot of if-then-elses and inner loops with different numbers of executions makes it really hard to get the actual maximum number of executions of your loop. As a general note, instead of calculating myVar+CONST four times, it is better to save it in another variable. And it is better to use natives instead of your own functions.
I did this for a PathExists function, which then worked pretty fast but still needs some Waits after XY executions to make sure it doesn't suddenly stop. ;)
Let's see if I explained it clearly :D. How often will the following code be executed?
I'm aware of the function, but it doesn't help the situation I described, because the movement costs are customized and not determined by the actual in-game terrain. Also, that function won't restrict itself to operating on my specific grid; instead it will return the cost of whatever StarCraft determines is the fastest path between the two points.