Just wanted to share with you guys how I have fun in Unity.
I wanted to create something physically realistic, and found out that Unity has a class to access the GPU and use it like a CPU. A video card is full of slow processor units, but there are a lot of them, so some kinds of calculations benefit greatly from being computed on the GPU.
So I simulated a ground-like matter with lots of particles. There are around 30K of them. They physically interact with each other and form a soil, and then I explode it. Now that it works, I'm thinking of remaking the old Scorched Earth game with this material.
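To give an idea, here's a bare-bones sketch of what a compute kernel looks like (not my real simulation; the buffer names and the gravity-only "physics" are just for illustration):

    // Sketch of a .compute file, assuming one thread per particle.
    #pragma kernel CSMain

    RWStructuredBuffer<float2> positions;    // filled from C# via ComputeBuffer
    RWStructuredBuffer<float2> velocities;
    float deltaTime;

    [numthreads(64,1,1)]
    void CSMain (uint3 id : SV_DispatchThreadID)
    {
        float2 v = velocities[id.x];
        v.y -= 9.81 * deltaTime;             // toy physics: gravity only
        positions[id.x] += v * deltaTime;
        velocities[id.x] = v;
    }

On the C# side you bind the buffers with ComputeShader.SetBuffer and launch the kernel with Dispatch; with 30K particles and 64 threads per group, that's roughly 470 thread groups per step.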
Sounds like Unity has a wrapper for OpenCL/CUDA, or maybe PhysX, that ends up doing that.
Note, it doesn't use it like a CPU; it is solely for stuff that GPUs can handle well, things that are easily parallelized and have no branching.
If I remember right, the GPU processors are ultra-fast but "stupid" (fewer instructions) and the CPU is slow but "smart" (x86 is a bigger instruction set)?
GPU processors are better when you need to do a ton of simple arithmetic operations. I'm no pro at computer science, but I assume the larger instruction set makes the CPU spend more time processing the incoming data?
Would be cool to see a "Worms"-style artillery game using this kind of terrain as well.
Guys, if you have a video card that supports shader model 5.0, you may be interested in checking out this little program I wrote out of curiosity, to measure how fast the Mandelbrot fractal renders on the GPU. It does indeed render pretty fast, so it feels like exploring it in real time. The only problem is that at deeper zooms it hits the precision limit of double floats and everything becomes pixelated. But that's a pretty deep zoom; you'll only see the pixels at around the 10^-15 scale.
It's a 64-bit Windows build.
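The kernel is basically the classic escape-time loop run once per pixel. Roughly like this sketch (not the exact shader in the program; passing the doubles through a buffer is just one way to do it, since as far as I know there's no direct setter for double uniforms):

    // Escape-time Mandelbrot sketch; doubles need shader model 5.0.
    #pragma kernel Mandelbrot

    RWTexture2D<float4> Result;
    StructuredBuffer<double> View;   // [0]=left, [1]=bottom, [2]=step per pixel
    int MaxIter;

    [numthreads(8,8,1)]
    void Mandelbrot (uint3 id : SV_DispatchThreadID)
    {
        double cx = View[0] + (double)id.x * View[2];
        double cy = View[1] + (double)id.y * View[2];
        double zx = 0, zy = 0;
        int i = 0;
        while (i < MaxIter && zx * zx + zy * zy < 4.0)
        {
            double t = zx * zx - zy * zy + cx;
            zy = 2.0 * zx * zy + cy;
            zx = t;
            i++;
        }
        float shade = i == MaxIter ? 0.0 : (float)i / MaxIter;
        Result[id.xy] = float4(shade, shade, shade, 1);
    }

Once the per-pixel step drops near double's ~1e-16 relative precision, neighbouring pixels get the same cx/cy, which is exactly the blocky look you see at extreme zoom.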
Yes, and HLSL is used as the language, so in practice there's not much difference from normal C#; I just can't use recursion (a function calling itself). How easily the task parallelizes is important; luckily, most stuff I play with parallelizes perfectly. The biggest problems I had were with shared access to data, when it's critical which parallel thread reads and writes the data first. But they have methods for atomic operations which fix this issue.
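A toy example of what I mean (not my actual code, the names are made up): if thousands of threads count particles into a shared grid, a plain += is a read-modify-write race, and HLSL's InterlockedAdd is the atomic fix:

    #pragma kernel Accumulate

    StructuredBuffer<float2> positions;   // particle positions, assumed non-negative
    RWStructuredBuffer<int> densityGrid;  // one counter per grid cell
    int gridWidth;
    float cellSize;

    [numthreads(64,1,1)]
    void Accumulate (uint3 id : SV_DispatchThreadID)
    {
        float2 p = positions[id.x];
        int cell = (int)(p.y / cellSize) * gridWidth + (int)(p.x / cellSize);

        // densityGrid[cell] += 1;            // WRONG: races with other threads
        InterlockedAdd(densityGrid[cell], 1); // atomic, so thread order no longer matters
    }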
The CPU does have a wider instruction set, but its "smartness" may be more about its ability to perform even complex operations in a single tick. If I remember correctly, one "add" operation takes one cycle, while one "multiply" is equivalent to a series of "add" operations, though a CPU may have extra hardware to multiply in one cycle too. But basically a CPU core runs maybe 10-20 times faster than a GPU core, while a GPU has hundreds or thousands of cores.
The GPU instruction set affects coding habits only slightly. And yes, it's best for massive math.
Yea, though Worms is itself a remake of Scorched Earth. They just didn't make the ground crumble down after explosions.
The primary difference between CPU and GPU is their approach to decision making.
CPUs have pipelines, out-of-order execution, and branch prediction. As you can imagine, that is great for sequential instructions and terrible for parallelization. GPUs skip most of that machinery, so branching basically crushes GPU performance; it drops off a cliff.
Ultimately it really comes down to Amdahl's law: the speedup you can get is capped by whatever fraction of the work stays serial.
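A rough sketch of what divergence means in shader code (toy math, nothing from the OP's project): threads in the same wave that take different sides of an if execute both sides one after the other, so hot inner branches often get rewritten branch-free:

    #pragma kernel Step
    RWStructuredBuffer<float> data;

    [numthreads(64,1,1)]
    void Step (uint3 id : SV_DispatchThreadID)
    {
        float x = data[id.x];

        // Divergent version: a wave whose threads disagree on the condition
        // pays for BOTH sides, one after the other:
        // if (x > 0) x = sqrt(x); else x = -x * x;

        // Branchless version: compute both cheap results and select.
        x = lerp(-x * x, sqrt(abs(x)), step(0, x));

        data[id.x] = x;
    }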
I used some particles to create a controllable tank, so now it's a playable prototype of a Scorched Earth remake.
That's pretty cool. I've tinkered with Unity before. So to clarify, are you working on a Unity project or just having fun messing around?
Both: working on a project and having fun.
It's a serious project that I've invested lots of time into, and I plan to complete the game and release it on Steam. But look at this; how can this not be fun:
I've uploaded the game to Steam Greenlight. So, if anyone has a Steam account, I'd be grateful for a vote.
Here's a new video showcasing the gameplay:
That looks cool as hell. Awesome job. Also impressed by the arsenal on those tanks.
I'm still working on the game. It has been greenlit; I hope to release it in a month.
lol, now if only we could change the purple tank to yellow, we'd have a nod to the old game Tank (or Battle Fortress?) that I played on the Sega Mega Drive as a toddler xD It had a pretty cool environment destruction system for its time.
Adun Toridas :)
Awesome. :)
OK, after 10 months of rather lazy but still enthusiastic and sometimes obsessive development, I finally released it on Steam. It's an early access version, unfinished and possibly buggy, but at least people can experience what it's like to play an arcade game inside a physical simulation.
Steam link