• Editor
  • Position of Different Skins Can't Be Consistent

So here in the first image I've positioned her rifle so she's sort of holding it at the stock, right?

Now let's change it to a double-barreled shotgun, and she's not holding it in the right place. This is of course because the image is a different size, and I can't change the position here or it will mess up the position for the rifle.

This is probably a common issue, so what is the typical workaround? Is there something I can do other than ensure the images are the exact same size?

Hi,

Even if they are placed within the same skin placeholder, the position of attachments can be unique to each skin. For example, the following video uses the same hat image, but the Y position is changed by changing the active skin:

The exception to this is when using linked meshes: the position also refers to the source mesh, so it is not possible to have a unique position for each skin.

I hope this will help you.

Thanks, I will try this out.


Misaki wrote:

Hi,

Even if they are placed within the same skin placeholder, the position of attachments can be unique to each skin. For example, the following video uses the same hat image, but the Y position is changed by changing the active skin:

The exception to this is when using linked meshes: the position also refers to the source mesh, so it is not possible to have a unique position for each skin.

I hope this will help you.

I think I figured out how to give skins different positions in setup mode. But I need to adjust their positions in each animation, since the character animations show the character from multiple perspectives.

I think this has been solved on Discord (make the skin active so you can translate attachments).

I need to adjust their positions in each animation, since the character animations show the character from multiple perspectives.

Usually with skins you reuse all your animations with a different look. Still, you can adjust the position of attachments based on which skin is visible in a couple of ways:

  • Create a transform constraint that positions a bone how you want it, then add the constraint to your skin, so it is only applied when that skin is active.
  • Play an animation that keys bone translation or keys mesh deforms. This isn't super convenient, because you can only apply one animation at a time in the viewport. You can apply multiple animations in the Preview view though.
  • You could move bones at runtime (see the sketch after this list). This is inconvenient since you can't see it in the editor.
  • You could change your approach so you have a skin per direction. This may be the simplest, but it probably means 4x or 8x more skins.
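
For the runtime option, here is a minimal sketch assuming the spine-csharp/XNA runtime; the "weapon" bone, the "shotgun" skin, and the offset values are hypothetical, and property names can differ slightly between runtime versions:

    // Nudge a bone each frame depending on which skin is active.
    // "weapon" and "shotgun" are made-up names for this example.
    void ApplySkinOffsets (Skeleton skeleton) {
        Bone weaponBone = skeleton.FindBone("weapon");
        if (weaponBone == null) return;

        float offsetX = 0, offsetY = 0;
        if (skeleton.Skin != null && skeleton.Skin.Name == "shotgun") {
            offsetX = 4;   // values you would otherwise bake into the setup pose
            offsetY = -1;
        }

        // Applied on top of the animated pose, so call this after applying your
        // animations and before updating the skeleton's world transforms.
        weaponBone.X += offsetX;
        weaponBone.Y += offsetY;
    }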

Setting up a character with multiple perspectives is complex. I think Erika has some resources for that; she can share them next time she's online.

20 days later

I tried this idea, which ought to have worked, but it still doesn't. I created a magenta background for the weapons so that when each layer was converted to an image, every weapon would have the same canvas size (30x18). With the canvases the same size, I just need to position the canvas in the animation.

However, as you can see, even though the canvases are all the same size and the position of the bone is the same, the skins now appear in slightly different places. It is off by a pixel. How is that possible?

In other words, the skins use images that are now the same size and they have the same positions (because they are skins) so they should appear in the exact same places right?

What is the position of the two attachments? I assume they are region attachments, so you'll need to go to setup mode and select them, then look at their translation. You can press Ctrl+C to copy one attachment's transform, select the other attachment, then Ctrl+V to paste the transform. That's the same as typing in the same transform values.

What if I made all the weapons the same canvas size by using this magenta background, so that when the PSD layers are exported as images they all end up the same size?

This would work, but then I need to make this magenta color transparent in the final output somehow. Is there a way to do that in the export settings?

Note that attachments can be the same size but in different positions.

Spine doesn't have a way to remove the magenta. Ideally the background is transparent rather than magenta. To get that you'd need to export without Trim whitespace checked, or create the images without running the script.

It's a good approach and mentioned here:
Runtime Skins - Spine Runtimes Guide: Creating attachments

Using that approach, you don't need to rig your additional images in Spine. You can rig just one attachment of each type, like a template, so you know where it is positioned. Then you can create many images of the same size for that position. You can pack them in your atlas (or pack an atlas at runtime dynamically), then at runtime you create an attachment that references the atlas region. You'd place the new attachment in the same position as the "template" attachment.

A similar approach is to modify the template attachment itself, changing its atlas region, rather than creating a new attachment. That can work when you only have one skeleton instance.

This approach allows you to have hundreds, thousands, or more images without rigging them in Spine, which can be a huge time savings.
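
As a rough sketch of that, assuming the spine-csharp runtime (4.x) and hypothetical "gun" slot, "weapons/rifle" template, and region names; older runtime versions use slightly different calls (a RendererObject and UpdateOffset instead of Region and UpdateRegion):

    // Build a skin at runtime by copying a rigged "template" attachment and
    // pointing the copy at a different atlas region of the same size.
    Skin CreateWeaponSkin (SkeletonData skeletonData, Atlas atlas, string regionName) {
        int slotIndex = skeletonData.FindSlot("gun").Index;

        // The template attachment defines the position, rotation, scale, and size.
        RegionAttachment template = (RegionAttachment)skeletonData.DefaultSkin
            .GetAttachment(slotIndex, "weapons/rifle");

        RegionAttachment attachment = (RegionAttachment)template.Copy();
        attachment.Region = atlas.FindRegion(regionName);
        attachment.UpdateRegion();

        Skin skin = new Skin(regionName);
        skin.SetAttachment(slotIndex, "weapons/rifle", attachment);
        return skin;
    }

The new skin is then used like any other, for example skeleton.SetSkin(skin) followed by skeleton.SetSlotsToSetupPose().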

15 days later

Thanks, I think I will consider creating the attachments programmatically.


Nate wrote:

Note that attachments can be the same size but in different positions.

Spine doesn't have a way to remove the magenta. Ideally the background is transparent rather than magenta. To get that you'd need to export without Trim whitespace checked, or create the images without running the script.

It's a good approach and mentioned here:
Runtime Skins - Spine Runtimes Guide: Creating attachments

Using that approach, you don't need to rig your additional images in Spine. You can rig just one attachment of each type, like a template, so you know where it is positioned. Then you can create many images of the same size for that position. You can pack them in your atlas (or pack an atlas at runtime dynamically), then at runtime you create an attachment that references the atlas region. You'd place the new attachment in the same position as the "template" attachment.

A similar approach is to modify the template attachment itself, changing its atlas region, rather than creating a new attachment. That can work when you only have one skeleton instance.

This approach allows you to have hundreds, thousands, or more images without rigging them in Spine, which can be a huge time savings.

Another idea is to modify the shader so that it removes all magenta pixels. I already have a modified shader that removes transparent pixels. Would this be a good idea, or would there be a hit to performance?

Using a shader to customize rendering is a fine idea. There are many ways to do it and a lot depends on your game toolkit.

In general, when changing the shader used to render (the same as when changing the texture), you first need to flush the pipeline so that it draws all the geometry that has been batched so far. Then you can draw your special stuff, flush again, and change the shader back to whatever is used for normal rendering. That means instead of drawing things in 1 batch, it takes 3. This of course only matters if you try it and find it affects your performance.
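
With XNA that pattern might look roughly like this; DrawBackgroundSprites and DrawForegroundSprites are hypothetical helpers, and the spine-xna SkeletonRenderer here stands in for whatever draws with your modified shader:

    spriteBatch.Begin();
    DrawBackgroundSprites(spriteBatch);  // batch 1: normal sprites, default shader
    spriteBatch.End();                   // flush before switching shaders

    skeletonRenderer.Begin();            // batch 2: skeletons with the magenta-key effect
    skeletonRenderer.Draw(skeleton);     // skeletonRenderer.Effect was set up elsewhere
    skeletonRenderer.End();              // flush again

    spriteBatch.Begin();
    DrawForegroundSprites(spriteBatch);  // batch 3: back to the normal shader
    spriteBatch.End();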

One way to avoid flushing is to use the same shader for all rendering. The new problem is then how to configure the shader appropriately for the normal and special rendering you do. If you are setting values on the shader, like a color to replace and the new color, then you will still need to flush when you change values. A solution is to store those values in the vertex data, which is the data stored for each vertex.

That is how we do tinting: every vertex stores a tint color. This way we can batch lots of geometry with different tint colors. To add your own vertex attributes you'd need to customize the rendering code.

Tell me if this is a bad idea: when drawing all the objects, some are drawn with a normal SpriteBatch in XNA while others are drawn as Spine skeletons. This means that while going through all the objects, sometimes the SpriteBatch must be ended, the skeleton is drawn (using a different shader), and then the SpriteBatch is begun again. I'm thinking that beginning and ending the SpriteBatch multiple times while switching shaders in the same frame is not good for performance?

Fewer batches are more efficient, but like anything performance related, if the user never notices then it doesn't matter at all. Last I checked, quite some time ago, most mobile devices can easily do some 30-40 batches per frame. Desktop is generally much higher, hundreds if not thousands.

The only way to really know is to benchmark your actual app to see if there is a problem. For example, draw your entire game scene multiple times each frame until you see the framerate drop. If you can draw 10x or 100x what you actually need to draw just once, then it's not worth worrying about, depending on the hardware you want to target.
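
A rough sketch of that kind of stress test in XNA, where DrawScene and stressFactor are hypothetical, might be:

    int stressFactor = 10; // raise this until the frame rate drops

    protected override void Draw (GameTime gameTime) {
        GraphicsDevice.Clear(Color.Black);
        // Draw the whole scene many times to see how much headroom there is.
        for (int i = 0; i < stressFactor; i++)
            DrawScene();
        base.Draw(gameTime);
    }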

Nate wrote:

Fewer batches are more efficient, but like anything performance related, if the user never notices then it doesn't matter at all. Last I checked, quite some time ago, most mobile devices can easily do some 30-40 batches per frame. Desktop is generally much higher, hundreds if not thousands.

The only way to really know is to benchmark your actual app to see if there is a problem. For example, draw your entire game scene multiple times each frame until you see the framerate drop. If you can draw 10x or 100x what you actually need to draw just once, then it's not worth worrying about, depending on the hardware you want to target.

Good advice. Thanks