... snip ... I think your suggestion/description above (and thus script coding) is actually wrong about those pos/rot/scale values that are attached to the bones...
Your suggestion (and script) 'combines' (multiplies) those values with the IREF (bind) matrices, to come up with a 'base pose'... I'm almost positive that you should not be combining them with the IREF matrices - they simply make up a 'local' bone matrix - just like any other keyframe data does. If you think about how the (your) exporter script works, you'll come to the same conclusion, I think. It simply converts 'global' bone matrices into 'local' bone matrices, by removing the parent matrix (multiplying by the inverse of the parent). If you need to convert them back to global matrices, they should be multiplied by the parent bone values (not the IREF matrices). ...snip...
The above quote (from me) is either misleading or just wrong on some points... partly due to my own confusion and/or not tracking exactly what your script was doing (and what the conditions were at the time, etc). I don't know much about Max - let alone MaxScript - so I'm having to read up on it as I go and/or make some assumptions. There's also some confusion (on my part, or at least my description is sometimes confusing) related to Local vs Global matrices/transforms.
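Since Local vs Global keeps coming up, here's the relationship in a nutshell - a pure-Python sketch (3x3 rotation-only matrices for brevity; the function names are mine, not from any script):

```python
def mat_mul(a, b):
    # Row-major 3x3 matrix product a*b.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_inv_rot(r):
    # Inverse of a pure rotation matrix is its transpose
    # (only valid when there is no scale/shear in the matrix).
    return [[r[j][i] for j in range(3)] for i in range(3)]

def to_local(bone_global, parent_global):
    # 'Remove' the parent: local = inverse(parent) * global
    return mat_mul(mat_inv_rot(parent_global), bone_global)

def to_global(bone_local, parent_global):
    # Put the parent back on: global = parent * local
    return mat_mul(parent_global, bone_local)
```

Round-tripping through `to_local` and `to_global` with the same parent matrix gives back the original global matrix, which is the identity being discussed above.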
Anyway, what I said about the bone transform values (pos/rot/scale in the bone structure, that you use to make a 'base' pose) is correct - they make up essentially the exact same pose as the first frame of the first animation (which makes me wonder why they exist at all). The part that I mis-stated is when I suggested that it was wrong to combine those values with the IREF matrices - I hereby take that back :) - your script is doing that correctly.
The confusion is/was due to several factors...
- as mentioned, there was some confusion about whether you were creating Local or Global matrices for the poses/animation keys. In fact, you appear to be mixing methods to come up with the 3 needed key types - translation and scale use Local (in coordsys parent), while rotations are computed by setting the bone's 'transform' value, which appears to be a Global (in coordsys world) operation.
- combined with the above, your function named "M3I_Bones_Bindpose" is actually setting the 'base' pose :) (that likely changed at some point during development). Nothing wrong with that... it's just confused me from time to time.
- after reviewing your script again, it appears that you have the IREF (bind) pose set prior to setting up keyframes - or the base pose... contrary to my previous reply (quoted above), I now see/agree that that is correct for what you are doing (ie. you already _are_ treating the bone values the same way you do the keyframe data).
...aside from the axis-swapping that I need to do for Cinema 4D, there are other differences that I need to handle or keep in mind that probably also add to my confusing explanations. For example, when I'm creating keyframes, I always need 'Local' (in coordsys parent) transforms... and that's actually how the data (rotations, in particular) is supplied already, so *I* don't need to combine the rotations with the IREF matrices, but for what you're doing, that's correct, etc. Frankly, I'm still confused why you need to do what you're doing with rotations - I would think that you could just set the .rotation values (in coordsys parent) instead of doing the .transform thing, but from your comments in the script, apparently you had some trouble with that.
Anyway, sorry for rambling - I just wanted to try to clarify that somewhat.
...combined with the above, your function named "M3I_Bones_Bindpose" is actually setting the 'base' pose :) (that likely changed at some point during development). Nothing wrong with that.. it's just confused me from time to time...
Speaking of the above... again, I don't know exactly how Max works, but it appears that when your script calls "M3I_Create_Skin", the "current frame" is still set at the "base pose" frame (frame 1) and not the "bind pose" frame (frame 0) - I didn't see anything between the call to "M3I_Bones_Poses" and creating the skin that changed the current frame. If it makes a difference, I think you want to use the bind-pose frame to skin the meshes (ie. set to the IREF matrices).
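For context on why the bind pose and the IREF matrices go together: standard linear-blend skinning takes each vertex back into a bone's space via IREF (which I assume is the inverse of that bone's bind-pose global matrix) and then out again by the bone's current global matrix - so only at the bind pose do the two cancel and leave the mesh unchanged. A 2D sketch with made-up names:

```python
def mmul(a, b):
    # 3x3 (homogeneous 2D) matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mpoint(m, p):
    # Apply a homogeneous 3x3 matrix to a 2D point.
    x, y = p
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def skin(v, weights, bone_globals, irefs):
    # Linear blend skinning: v' = sum_i w_i * G_i * IREF_i * v
    x = y = 0.0
    for w, g, inv in zip(weights, bone_globals, irefs):
        px, py = mpoint(mmul(g, inv), v)
        x += w * px
        y += w * py
    return (x, y)
```

With one bone bound at translate(1, 0) (so IREF = translate(-1, 0)): while the bone sits at its bind transform the vertex stays put, and moving the bone up one unit carries the vertex with it.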
In 3dsmax, there are dialog options under Hierarchy -> Link Info where you can turn off scaling inheritance. So, for example, if you had a sphere object that was the parent of a cube object, and you added some scaling to the sphere to make it look like an ellipsoid, you wouldn't want that nasty scaling to pass down to the cube.
It seems this inheritance information is also stored in the M3 BONE chunk as flags. I was wondering if anyone has found any models that make use of these flags, such as not passing along scale? If so, which M3 models?
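To illustrate what turning that inheritance off would mean (the sphere/ellipsoid example above), here's a toy composition - uniform scale only, rotation omitted, and exactly how Max places the child's offset when inheritance is off is my assumption, not something I've verified:

```python
def compose(parent, child, inherit_scale=True):
    # Transforms as (uniform_scale, (tx, ty)); rotation omitted for brevity.
    ps, (px, py) = parent
    cs, (cx, cy) = child
    # The child's offset is placed in the parent's (scaled) space...
    wx, wy = px + ps * cx, py + ps * cy
    # ...but the child's own scale only picks up the parent's scale
    # when inheritance is on.
    ws = ps * cs if inherit_scale else cs
    return (ws, (wx, wy))
```

So a cube parented to a 2x-scaled sphere ends up 2x-scaled itself with inheritance on, but keeps its own scale with inheritance off.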
That's an interesting question... and (as you mentioned) the bones do have a 'flags' member that would seem to include flag bits for the above (see: http://code.google.com/p/libm3/wiki/BONE ), but interestingly, those flags don't seem to be set on the models I've looked at so far - at least I haven't noticed any. I'm currently logging lots of info when reading .m3 files, so I get something similar to the following for each bone...
...(those are my own internal variable names and in some cases user-import-scaling (and axis-swapping) has been applied to the logged values to make it easier for me to debug/compare to what I'm seeing in the app).
So far, I'm mostly seeing the UNKNOWN (bit 14) flag set on most bones (unless the m_flags == 0) and then ANIMATED for bones that have animation data and SKINNED for any that are involved in mesh skinning, but I'm not seeing any other flags set (though I'm sure one or both of the BILLBOARD flags are used on some models).
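The logging above is basically just a bitmask decode; a sketch of the idea in Python. Note that only the "bit 14" position is taken from the posts here - the other bit positions below are placeholders I made up to show the technique, so check the libm3 wiki page for the real values:

```python
# Hypothetical bit positions - only UNKNOWN (bit 14) is stated above;
# the rest are placeholders, NOT confirmed M3 values.
BONE_FLAGS = {
    0x0010: "BILLBOARD1",
    0x0040: "BILLBOARD2",
    0x0200: "ANIMATED",
    0x0800: "SKINNED",
    0x4000: "UNKNOWN_BIT14",
}

def decode_bone_flags(m_flags):
    # Return the names of all recognized set bits, plus any leftover
    # unrecognized bits as a raw hex string.
    names = [name for bit, name in sorted(BONE_FLAGS.items()) if m_flags & bit]
    leftover = m_flags & ~sum(BONE_FLAGS)
    if leftover:
        names.append("0x%X" % leftover)
    return names
```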
The lower two bytes of the bone flags are always zero, so I guess it's not as important as I thought.
It's probably an old deprecated feature that's no longer being used.
BTW, unrelated to anything above, but I added some additional bone info tracking and noticed that the value before the Name (that everyone is referring to as 'd1') is NOT always -1... here's a bone out of the Zergling:
...I added code to prepend a string of asterisks if the value was not -1, to make it easier to find them... note the value is '6' in the above bone. In that same mesh, I also found values of 5, 8, 9 and 0. Kinda seems like some index value, but I have no idea what it's used for - just thought I'd pass that info along while I was thinking about it. So far, s1, d2, d3 and d4 all seem to just be 0.
Regarding the normal map, I sort of multiplied the blue channel over the green and red for the defiler. Somehow it works. Though, I'm not sure if this is how it should be done, or how Blizzard does their normal maps.
Whatever you do, wholeheartedly, moment by heartfelt moment, becomes a tool for the expression of your very soul.
Glazing over the technical bits that don't make much sense to me, were any of the issues I spoke of before ever addressed? Smoothing groups not working on multi object materials (only applying to the first texture and not the others) and such?
Regarding the normal map, I sort of multiplied the blue channel over the green and red for the defiler. Somehow it works. Though, I'm not sure if this is how it should be done, or how Blizzard does their normal maps.
I'm not sure I'm following that, but you might find my post at the bottom of this thread of interest, regarding how SCII normal maps are stored in .dds files.
Note that the blue (Z) value will be 're-computed' by the app after loading the .dds file (based on the red/green channels), so what you describe above would be... a bit odd :).
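That 're-compute' step is just re-normalizing: map the red/green bytes back to [-1, 1], then solve x^2 + y^2 + z^2 = 1 for z, taking the positive root since tangent-space normals point 'out'. A sketch (my own helper, not any app's actual code):

```python
import math

def reconstruct_z(r_byte, g_byte):
    # Map 0..255 back to -1..1 for X and Y.
    x = r_byte / 255.0 * 2.0 - 1.0
    y = g_byte / 255.0 * 2.0 - 1.0
    # Solve for Z on the unit sphere, clamping against rounding error.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    # Map Z back to a 0..255 byte.
    return int(round((z * 0.5 + 0.5) * 255.0))
```

A 'flat' texel (128, 128) reconstructs to a Z of essentially 1.0 (byte 255), which is why the blue channel in the file can be thrown away entirely.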
I apologize - I don't understand. What I do with normal maps is copy red and paste it to alpha, turn alpha into white, turn blue into black, then save. I use a Photoshop plugin.
Do you mean we need not do this, and can instead just save using the DXT5nm compression?
Regarding my post, this is the way I got to "fix" the inverted normal map at the mirrored geometry. I'm talking about an actual model in StarCraft II having inverted normal maps on the other side of mirrored geometry.
Edit:
Phygit, man, thank you so much. The normal map is now fixed. At least for the defiler. I'll test the other ones later.
What I did was save a non-edited normal map (with a black alpha channel added) using DXT5nm. This created RGBA. Then I changed red to white and blue to black, made sure the bumps were inverted, and kaboom!
So... saving in DXT5n (or DXT5nm) didn't automatically swap the channel data around for you? I hadn't tried any of nVidia's tools or plugins - but my own plugin (for Cinema 4D) does the swapping for you.
Anyway, it sounds like you figured it out, so I'm happy to hear that my post was useful :).
It did create RGBA grays. Each of the channels changed somehow - they got mixed together, though I'm not sure how or into what. Saving did not make the red channel white or the blue channel black, though; I had to do that manually. However, the alpha channel contains red and something else mixed into it, and the same goes for the green.
It worked for the defiler. I'm testing it for the other models later.
I've always wondered what exactly Blizzard uses for their normal maps, and this seems to be it! :)
Edit: Aargh. Some models still have the issue. This is getting weird. I've checked, and DXT5nm is just like the manual swapping. I'm now confused as to why the defiler's normal map alpha channel changed somewhat when saved using DXT5nm.
Anyway, I've noticed the Blizzard Carrier also has some inverting occurring at certain parts of the model, at certain lighting angles.
I'm leaving this to the experts. I just don't know enough regarding what's really happening.
As mentioned, I hadn't really used any of nVidia's plugins or tools but I am familiar with the sample code nVidia supplies (I'm using it in my own plugin). I'm a little surprised to hear what it's doing when you choose DXT5nm, but...
Basically, there are two 'Normal Map' related issues:
1. as we've been discussing, channel swapping/rearranging is needed. It sounds like the plugin is not really doing this (or at least not doing it correctly) for you.
2. a separate issue related to Normal Map storage in .dds files is... the type of compression used. When you tell the plugin to save in the DXT5nm format, the overall format is the same as regular DXT5, but the path through the compression code is slightly different, to help keep the XYZ Normal data from being inappropriately smoothed/averaged/blended (in other words, it knows that it's going to be used as Normal Map data, instead of 'color' data).
...since it's not really handling the channel-swapping (correctly) for you, my suggestion would be to just do that by hand (manually copy the red->alpha, then clear the red and fill the blue), then just save it as regular DXT5. By using DXT5, you don't get the (mostly minimal) benefit of item #2, above, but in many/most cases you might not be able to tell the difference anyway.
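The by-hand swap can also be scripted. A sketch operating on raw RGBA tuples so it stays self-contained (no Pillow etc.); note the fill values differ between the posts above (red-to-white/blue-to-black vs clear-red/fill-blue), so this sketch uses the white-red/black-blue version - adjust to whichever convention your target expects:

```python
def swap_for_dds(pixels):
    # pixels: iterable of (r, g, b, a) byte tuples.
    # X (red) moves into alpha, Y (green) stays put, and the now-unused
    # red/blue channels are filled with constants (white/black here -
    # conventions vary, see the discussion above).
    return [(255, g, 0, r) for (r, g, b, a) in pixels]
```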
Now, having said all that... I'm still not clear how what you're doing will help with the "shared-uv but mirrored polys" issue... unless the map looked bad on both sides - you could correct the part that got mapped incorrectly/inconsistently, but the other side of the mesh would still need an entirely separate (and inverted) bitmap to correct it.
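For what it's worth, the reason mirroring causes this: a mirrored uv-island flips the handedness of the tangent basis, so the stored X component points the wrong way for that half of the mesh. That "entirely separate (and inverted) bitmap" would essentially just be the same map with the red channel flipped - a sketch:

```python
def invert_x(pixels):
    # Flip the X (red) channel of RGB8 tangent-space normal texels.
    # Since 0..255 maps to -1..1, inversion is simply 255 - value.
    return [(255 - r, g, b) for (r, g, b) in pixels]
```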
Here are two shots. The lighting comes from +x+y and +x-y. Flipping the channels in different combinations just causes the problem to occur at, say, -x-y and -x+y. Other variations of flipping just move the issue to different lighting angles.
The mesh is mirrored at the center so that each side is a square. From this picture, the mapping and mirroring come from the left part.
As you can see, the left part has those lines raised, while the right part is a cut.
What model is that? I wasn't able to find the Defiler in the demo version data. I'd like to find one of these examples so I can see the actual Normal Map(s) and uv-mapping.
This is a model exported using NiN's scripts. The defiler is also a custom model. Do you still want the .m3 for the above model?
Attached is a zip containing the diffuse, normal map, and the m3.
The normal map inside the zip file is one where both alpha and green are multiplied with the blue channel. The PNG below is the raw tangent-space normal map generated from xNormal.
Thanks, I'll take a look at that. In the meantime... if you are generating these models from scratch, why not just 'fix' (ie. to not share/mirror) the uv-mapping from the start? I assume it's being done this way to optimize pixel-space, but the model is not highly detailed and the 'bottom' of it wouldn't need to be as hi-res as the top, so you could afford to give the top more real-estate, if needed. Something more like one of the images below...
ATTACHMENTS
tuv1.jpg
tuv2.jpg
Hi dropz,
Thanks - I didn't know there was a demo out (I should have looked :) ).
nvm... found it.
Edit:
Guys, I was wrong about the normal map issue. It's still there. Though, apparently only occurs at certain angles of the geometry and light.
If the mirrored face gets lit the same way the other face is lit, it still becomes inverted.
Or it could be that the 'd1' value relates to particles - using the bone link between two or more bones for particle flow, just like splines?
Any news on the scripts, NiN?