Spine Pro Vtuber Working Prototype

Thank you so much for adding left/right pupil pitch/yaw threshold settings!! I have tried them and changing the threshold values actually helped to reduce the eye wobbling. Here is the result:

The settings related to the eyes for my model are as follows:

  • left/right eye strength: 50
  • left/right pupil pitch strength: 10
  • left/right pupil pitch threshold: 0.15
  • left/right pupil yaw strength: 4
  • left/right pupil yaw threshold: 0.07

While testing these settings, I found I should modify my animations, so the model in the video above is updated. Here are the updated project files:
chara-for-Spine-Vtuber-Prototype_20230117.zip
As shown in the following image, I adjusted some animations so that the pose does not change immediately after changes in the eyelids and mouth movements are detected:

This adjustment was made so that minute movement changes would not cause the eyelids or mouth to open slightly. Also, since I set the pupil pitch thresholds higher than the pupil yaw thresholds, I adjusted the pitch down/up animations so that the pupils do not move from frame 0 to frame 10; this way the position does not appear to jump suddenly when the threshold is exceeded.
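
For anyone curious how such a threshold might behave under the hood, here is a minimal dead-zone sketch (an illustration only, not the tool's actual code) in which the value ramps up from zero at the threshold instead of jumping, which is exactly the kind of sudden jump the frame 0 to frame 10 adjustment above works around:

```ts
// Hypothetical dead-zone for one pupil axis; "strength" and "threshold" mirror
// the settings above, but this is not the tool's actual code.
function applyPupilAxis(raw: number, strength: number, threshold: number): number {
  // Ignore tracking noise below the threshold so the pupil holds still.
  if (Math.abs(raw) < threshold) return 0;
  // Re-map the remaining range so motion ramps up from 0 at the threshold
  // instead of jumping straight to the thresholded value.
  const scaled = (Math.abs(raw) - threshold) / (1 - threshold);
  return Math.sign(raw) * scaled * strength;
}

// Example with the yaw settings above: strength 4, threshold 0.07
applyPupilAxis(0.05, 4, 0.07); // 0     (below threshold, no movement)
applyPupilAxis(0.20, 4, 0.07); // ~0.56 (ramps up smoothly)
```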

By the way, you said:

You cannot export those settings to .svp yet.

but somehow I can export the threshold settings to .svp. (Maybe you updated this tool after replying to this thread?)

Anyway, I am happy with the results this time! There are some things I would like to fix in my rig (e.g., the half-eye pose is not very good, although I have adjusted it many times), but I think the current specification of this tool is already great for vtubing. I am looking forward to the day when facial expression animations can be added. Great work!! :yes: 😃

12 days later

but somehow I can export the threshold settings to .svp. (Maybe you updated this tool after replying to this thread?)

It has been a while since I worked on the source code. I forgot that I have a function that exports settings from a list of default setting values. I updated that list with the left/right pupil pitch/yaw threshold default values, so they got exported. :lol:
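
A minimal sketch of that pattern (illustrative names and values, not the actual source): anything added to the defaults list gets written into the exported .svp automatically.

```ts
// Minimal sketch (not the actual source): exporting settings by iterating a
// defaults list, so any newly added default is picked up by the export for free.
// The values here are placeholders, not the real defaults.
const DEFAULT_SETTINGS: Record<string, number> = {
  "left pupil pitch threshold": 0.1,
  "right pupil pitch threshold": 0.1,
  "left pupil yaw threshold": 0.1,
  "right pupil yaw threshold": 0.1,
  // ...other defaults
};

function exportSvp(current: Record<string, number>): string {
  const out: Record<string, number> = {};
  for (const key of Object.keys(DEFAULT_SETTINGS)) {
    // Fall back to the default when the user never touched the setting.
    out[key] = current[key] ?? DEFAULT_SETTINGS[key];
  }
  return JSON.stringify(out, null, 2); // written into the .svp file
}
```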


Update 1.1.5

  • Add "Backface culling" checkbox to "Canvas Settings" menu. The backface of attachments are invisible if checked. The back side is kept consistent with Spine editor.

  • Add "Flip Horizontal" checkbox to "Canvas Settings" menu . When checked, the world X-axis is flipped.

  • Add empty animation tracks: custom tracks 1 to 4.

  • Add customizable expression buttons 0 to 20 underneath the rendering canvas. Button 0 resets all custom tracks, removing all active customizable expressions. The buttons become usable once the corresponding expression slot has been set up.

  • Add customizable expression setup interface under "Model Settings" menu.

    1. Add customizable expression slot drop-down menu ( 1 to 20 ) in the setup. Each slot allows you to set up multiple custom track indices ( 1 to 4 ), transitional animations, and animation loops that follow the transitional animations.

    2. Include a button to add a setup interface row for adding more custom tracks ( 1 to 4 ), transitional animations, and animation loops.

    3. Add a button to remove the last setup interface row. You do not want any rows with incomplete information, as you will not be able to finish setting up the customizable expression slot otherwise.

    4. Add a button to assign all the custom tracks ( 1 to 4 ), transitional animations, and animation loops to the numbered customizable expression slot ( 1 to 20 ).

    5. Each setup interface row has three parts: custom track index ( 1 to 4 ), transitional animation, and animation loop. The custom track index ( 1 to 4 ) and transition animation are required for setup, while the animation loop is optional. The transition animation and animation loop input fields are drop-down menus that list all the animations found in the file ( .json | .skel ). The transition animation does not loop, and the animation loop plays after the transitional animation. There is no mix duration between the transition animation and the animation loop (see the sketch after this changelog for how this can map to spine-ts animation tracks).

    6. The customizable expression slots are saved into and can be loaded from the SVP file.

  • Remove animation selection from the Single Value Properties drop-down list within "Model Settings".
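
For readers wiring something similar up themselves, here is a hedged sketch of how an expression slot row could map onto spine-ts animation tracks. Only the spine.AnimationState calls are real API; the row shape, the track offset, and the function itself are assumptions, not the prototype's source.

```ts
import * as spine from "@esotericsoftware/spine-webgl"; // assuming the spine-webgl build of spine-ts

interface ExpressionRow {
  customTrack: 1 | 2 | 3 | 4; // custom track index ( 1 to 4 )
  transition: string;         // transitional animation, required, plays once
  loop?: string;              // animation loop, optional, queued after the transition
}

const CUSTOM_TRACK_OFFSET = 10; // assumed: custom tracks sit above the built-in tracks

function applyExpressionSlot(state: spine.AnimationState, rows: ExpressionRow[]) {
  for (const row of rows) {
    const track = CUSTOM_TRACK_OFFSET + row.customTrack;
    // Transition animation does not loop.
    state.setAnimation(track, row.transition, false);
    if (row.loop) {
      // Animation loop starts when the transition ends (delay 0),
      // with no mix duration between the two, as the changelog describes.
      const loopEntry = state.addAnimation(track, row.loop, true, 0);
      loopEntry.mixDuration = 0;
    }
  }
}
```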

https://silverstraw.itch.io/spine-vtuber-prototype

Updated model for testing custom expressions.

https://silverstraw.itch.io/spine-vtube-test-model

The expressions feature is really fun and wonderful!! 😃 I haven't been able to test it much yet because preparing the animations takes more time, but here are the results of a quick test I did:

I know that the transition animations back to the default pose also need to be registered in the expression buttons, but I have not been able to do that yet.

My current model is in a very half-assed state, but I'll leave the data here for anyone who wants to test it:
chara-for-Spine-Vtuber-Prototype_20230130.zip
When I make more improvements, I will share the data here again.

I should add another setting for a 'returning to default pose' animation to the customizable expression feature. That way you save another expression slot.
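
If it helps, spine-ts already has a primitive for this: setEmptyAnimation() mixes a track back to the setup pose. A minimal sketch, assuming the custom tracks map one-to-one onto AnimationState track indices (which may not be how the prototype numbers them):

```ts
import * as spine from "@esotericsoftware/spine-webgl"; // assuming the spine-webgl build of spine-ts

// Sketch: mix the custom expression tracks back toward the setup pose.
// The 1-to-4 numbering mirrors the "custom tracks"; how they map to
// AnimationState track indices in the prototype is an assumption.
function returnToDefaultPose(state: spine.AnimationState, mixDuration = 0.2) {
  for (let customTrack = 1; customTrack <= 4; customTrack++) {
    state.setEmptyAnimation(customTrack, mixDuration); // mixes the track out over mixDuration seconds
  }
}
```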

After this feature, facial expression recognition AI should be as easy as setting which expression slot you want to use for the AI result.

2 months later

SilverStraw Wow, the tool can finally capture body movements!! That is definitely an innovative update 😃 I'm really looking forward to the day when it will be available!!

This is amazing! Looking forward to creating a cute prototype for this when I have time too >.< Thank you for your work!

2 months later

There is an updated tracking framework, but I would have to break down the application. It is almost like starting over. Wish me luck 🙏.

Some possible gestures if it works 😰 (see the sketch after the lists below).

Face gestures
1 - browDownLeft
2 - browDownRight
3 - browInnerUp
4 - browOuterUpLeft
5 - browOuterUpRight
6 - cheekPuff
7 - cheekSquintLeft
8 - cheekSquintRight
9 - eyeBlinkLeft
10 - eyeBlinkRight
11 - eyeLookDownLeft
12 - eyeLookDownRight
13 - eyeLookInLeft
14 - eyeLookInRight
15 - eyeLookOutLeft
16 - eyeLookOutRight
17 - eyeLookUpLeft
18 - eyeLookUpRight
19 - eyeSquintLeft
20 - eyeSquintRight
21 - eyeWideLeft
22 - eyeWideRight
23 - jawForward
24 - jawLeft
25 - jawOpen
26 - jawRight
27 - mouthClose
28 - mouthDimpleLeft
29 - mouthDimpleRight
30 - mouthFrownLeft
31 - mouthFrownRight
32 - mouthFunnel
33 - mouthLeft
34 - mouthLowerDownLeft
35 - mouthLowerDownRight
36 - mouthPressLeft
37 - mouthPressRight
38 - mouthPucker
39 - mouthRight
40 - mouthRollLower
41 - mouthRollUpper
42 - mouthShrugLower
43 - mouthShrugUpper
44 - mouthSmileLeft
45 - mouthSmileRight
46 - mouthStretchLeft
47 - mouthStretchRight
48 - mouthUpperUpLeft
49 - mouthUpperUpRight
50 - noseSneerLeft
51 - noseSneerRight
52 - tongueOut

Hand gestures
["None", "Closed_Fist", "Open_Palm", "Pointing_Up", "Thumb_Down", "Thumb_Up", "Victory", "ILoveYou"]

    @SilverStraw oh, a new MediaPipe model? That's super cool! Likely worth the time investment.

    10 days later

    Misaki https://silverstraw.itch.io/spine-vtuber-prototype-2
    I need your help breaking the application again 🤣. This is using the new face tracking. I haven't added any tracking smoothing and I am not sure if it is needed. You can tell me your thoughts.

    Hopefully I can transfer the body animation code to the new body tracking without much problem, like the face animation. The draw order won't be ready with the body tracking because I haven't figured out a good solution to handle depth for Spine skeletons.

      SilverStraw The new face tracking seems very good! 😃 You mentioned you did not add any tracking smoothing, but it already seems very smooth. The following video is the result of testing on my end:

      It may be hard to tell from the video, but I feel that the movement has become less wobbly, and when I want to stop the movement, it can be stopped properly.

      I'm looking forward to body tracking becoming available! 💓

      11 days later

      2.0.1

      • Added a check box for enabling body pose tracking. It is located under "AI Settings" menu > "AI Tracking Modes" section > "Body Tracking".
      • Added a button for a 3D plot graph showing the body pose. It is named "Update 3D Plot" and is located under the "Camera Settings" menu.

      https://silverstraw.itch.io/spine-vtuber-prototype-2

      Test model updated.
      https://silverstraw.itch.io/spine-vtube-test-model

        SilverStraw Awesome!! I tried it with your test model and it certainly did body tracking on my PC. It's absolutely fun to have the model on the screen react to my movements 😄 When I get a chance, I would like to modify my model for body tracking. Great job!! 🎉


        For people who don't have time to try it for themselves, I've recorded a screen video. This may be helpful for those who are having trouble understanding how to set it up:

        For your information, I was wearing a skirt when I recorded this, and I think that's why the tracking of my legs wasn't working very well.

        @Misaki In my experience, this body tracking works better when the whole body is within the frame, so it may not entirely be your skirt's fault. If you have the chance, could you test the body tracking while wearing a skirt but with your feet within the webcam frame? I think the body tracking would predict better results that way.

        The body tracking software makes more assumptive predictions when your legs and arms are not in frame.

        I might have the software check whether both the knee and ankle are out of frame and prevent them from swaying wildly. So far, the software checks those features individually.
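
        A combined check could require both joints to be trackable before the leg is allowed to animate. A minimal sketch, assuming MediaPipe Pose landmark numbering and normalized coordinates; what the app does when the check fails is an assumption:

```ts
// Sketch of checking the knee and ankle together before a leg is animated.
// Landmark indices follow MediaPipe Pose: 25/26 = knees, 27/28 = ankles.
// Freezing or defaulting the leg when the check fails is an assumed reaction.
interface Landmark { x: number; y: number; visibility?: number }

function legIsTrackable(pose: Landmark[], side: "left" | "right", minVisibility = 0.5): boolean {
  const knee = pose[side === "left" ? 25 : 26];
  const ankle = pose[side === "left" ? 27 : 28];
  const usable = (p: Landmark | undefined) =>
    !!p &&
    (p.visibility ?? 0) >= minVisibility &&
    p.x >= 0 && p.x <= 1 && p.y >= 0 && p.y <= 1; // normalized coordinates stay inside the frame
  // Require BOTH joints; if either is unusable, keep the leg still / pointing down
  // instead of letting the guessed positions sway wildly.
  return usable(knee) && usable(ankle);
}
```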

        I mentioned this on Discord. The next tracking I should tackle is hand gestures since I think it's straightforward. Various hand gestures: closed fist, open palm, pointing up, thumbs down, thumbs up, victory sign, I❤️U sign. If the tracker detects any of the hand gestures with high confidence, then swap out the hand components. I hope it is able to distinguish left and right hands.
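
        A hedged sketch of that idea: only swap the hand components when the top gesture's score clears a threshold, and use the handedness label to tell left from right. The array shapes mirror MediaPipe's gesture output; the 0.7 threshold and swapHandAttachment() are made up.

```ts
// Sketch: act on a gesture only when its score clears a threshold, and use the
// handedness label to pick which hand's attachments to swap.
interface Category { categoryName: string; score: number }

function handleGestures(gestures: Category[][], handedness: Category[][], minScore = 0.7) {
  gestures.forEach((categories, i) => {
    const top = categories[0];                     // best gesture for hand i
    const hand = handedness[i]?.[0]?.categoryName; // "Left" or "Right"
    if (!top || !hand) return;
    if (top.categoryName !== "None" && top.score >= minScore) {
      swapHandAttachment(hand, top.categoryName);  // e.g. ("Right", "Victory")
    } else {
      swapHandAttachment(hand, "Open_Palm");       // open hand assumed as the default pose
    }
  });
}
declare function swapHandAttachment(hand: string, gesture: string): void; // hypothetical
```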

        Just chiming in to say this is super cool! I should play with the new MediaPipe models one day as well and resurrect my vtubing app 🙂

        10 days later

        2.0.2

        • Add a pointing-downward behavior for the Spine model's legs when the tracked legs are outside the video frame boundary. The leg positions are more assumptive and less reliable outside of the video frame boundary, so the legs default to pointing downward ( standing position ), which looks more natural than flailing legs.
        • Start to implement web parallel processing for browsers that support it. Multiple threads might speed up computation tasks (see the sketch after this list).
        • Defer the 3D graph plotting task to optimize canvas rendering.
        • Save the 3D plot graph camera location when the user rotates the graph. Note that the user can only rotate the graph when the graph is not being updated ("Update 3D Plot" is unchecked).
        • Replace the word "body" with "torso" in the animation names "body_rotate", "body_scale_y", and "body_scale_x" to make these animation names less ambiguous.

        https://silverstraw.itch.io/spine-vtuber-prototype-2

        Test model updated.
        https://silverstraw.itch.io/spine-vtube-test-model

        16 days later

        Spine Vtuber Prototype 2.0.3

        • Added a check box for enabling hand tracking. It is located under "AI Settings" menu > "AI Tracking Modes" section > "Hand Tracking".
        • Added "Debug Bounding Boxes" checkbox under "Canvas Settings" menu. Axis aligned bounding box and bounding polygon have separate color and opacity settings. Bounding polygons would help visualize collision shapes.
        • Fixed a lag bug caused from turning on and off the camera/video. The software activated body tracking too frequently in a short amount of time. Mitigated by slowing down the activation from microseconds to milliseconds.
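
        The described mitigation amounts to a throttle. A generic sketch, with the interval value and names being illustrative rather than the actual code:

```ts
// Generic throttle sketch matching the described fix: ignore re-activation
// requests that arrive within a minimum interval (milliseconds rather than
// effectively every microsecond).
function makeThrottledActivator(activate: () => void, minIntervalMs = 250) {
  let last = -Infinity;
  return () => {
    const now = performance.now();
    if (now - last >= minIntervalMs) {
      last = now;
      activate(); // e.g. (re)start body tracking after the camera/video is toggled
    }
  };
}

const activateBodyTracking = makeThrottledActivator(() => {
  // hypothetical: create or restart the pose tracker here
});
```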

        https://silverstraw.itch.io/spine-vtuber-prototype-2

        Spine Vtuber Prototype 2.0.4
        Implemented hand gesture tracking in the application. Two hands closer to the goal.

        https://www.deviantart.com/silverstraw/art/Spine2D-Vtuber-Hand-Gesture-Test-974400292

        • Fixed "Showing Landmarks" canvas bug where it was not showing. The canvas was erasing faster than it can draw due to the 2.0.3 lag bug. Synchronized the erasing only when it is ready to draw the next frame.
        • Added hand gesture tracking for closed fist, point up, victory sign, I love you sign, thumbs up. thumbs down. Hands open is assumed to be the default hand position.
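
        A generic sketch of that erase/draw synchronization: clear the canvas only inside the same animation-frame callback that draws, so it is never wiped without a frame to replace it. Helper names are hypothetical.

```ts
// Sketch: erase and draw in the same requestAnimationFrame callback so the
// canvas is never cleared without a new frame ready to replace it.
function startLandmarkLoop(canvas: HTMLCanvasElement) {
  const ctx = canvas.getContext("2d")!;
  const frame = () => {
    if (latestLandmarks) {
      ctx.clearRect(0, 0, canvas.width, canvas.height); // erase only when about to draw
      drawLandmarks(ctx, latestLandmarks);
    }
    requestAnimationFrame(frame);
  };
  requestAnimationFrame(frame);
}
declare let latestLandmarks: Float32Array | null;       // hypothetical: most recent tracking result
declare function drawLandmarks(ctx: CanvasRenderingContext2D, l: Float32Array): void; // hypothetical
```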

        https://silverstraw.itch.io/spine-vtuber-prototype-2
        https://silverstraw.itch.io/spine-vtube-test-model