Nate
I looked at the libgdx runtime and saw this comment in PngExportTest.java that you mentioned:
// Copy the FBO to the pixmap and write it to a PNG file
Can I convert a Pixmap directly to a BufferedImage? The GIF compositing tool I found doesn't support Pixmap directly. I want to synthesize the GIF directly instead of exporting PNGs first; otherwise I would have to export the PNGs and then read them back with BufferedImage, and I think exporting the GIF directly is probably more resource efficient.
In short, I want to turn a Pixmap into a BufferedImage.
Convert Spine to GIF through the runtime
You can of course, but the ImageIO API is terrible. This may get you started:
/** If alpha is true, returns an RGBA buffer, else returns an RGB buffer. */
static public BufferedImage pixmapToImage (Pixmap pixmap, boolean alpha, @Null BufferedImage reuse) {
	DirectColorModel colorModel;
	if (alpha)
		colorModel = rgbaColorModel;
	else
		colorModel = rgbColorModel;
	if (reuse != null) {
		if (reuse.getWidth() == pixmap.getWidth() && reuse.getHeight() == pixmap.getHeight()
			&& reuse.getColorModel() == colorModel && reuse.getRaster().getDataBuffer() instanceof BufferRGBA rgba) {
			rgba.setPixmap(pixmap);
			return reuse;
		}
		reuse.flush();
	}
	DataBuffer buffer = alpha ? new BufferRGBA(pixmap) : new BufferRGB(pixmap);
	SampleModel sampleModel = colorModel.createCompatibleSampleModel(pixmap.getWidth(), pixmap.getHeight());
	var raster = new WritableRaster(sampleModel, buffer, new Point()) {};
	return new BufferedImage(colorModel, raster, false, null);
}
static private final DirectColorModel rgbaColorModel = new DirectColorModel(32, //
	0xff000000, 0x00ff0000, 0x0000ff00, 0x000000ff);
static private final DirectColorModel rgbColorModel = new DirectColorModel(24, //
	0x00ff0000, 0x0000ff00, 0x000000ff);
static public class BufferRGBA extends DataBuffer {
	protected ByteBuffer pixels;
	int width, height;

	public BufferRGBA (Pixmap pixmap) {
		super(DataBuffer.TYPE_INT, pixmap.getWidth() * pixmap.getHeight());
		setPixmap(pixmap);
	}

	public void setPixmap (Pixmap pixmap) {
		if (getSize() != pixmap.getWidth() * pixmap.getHeight())
			throw new IllegalArgumentException("Pixmap size does not match buffer size.");
		width = pixmap.getWidth();
		height = pixmap.getHeight();
		pixels = pixmap.getPixels();
	}

	public void setElem (int bank, int index, int value) {
		pixels.putInt(index << 2, value); // 4 bytes per pixel
	}

	public int getElem (int bank, int index) {
		return pixels.getInt(index << 2);
	}

	public void toInts (int[] ints) {
		IntBuffer buffer = pixels.order(ByteOrder.BIG_ENDIAN).asIntBuffer();
		if (ints.length != buffer.remaining())
			throw new IllegalArgumentException("Array length does not match buffer size.");
		buffer.get(ints);
	}
}
static public class BufferRGB extends BufferRGBA {
	public BufferRGB (Pixmap pixmap) {
		super(pixmap);
	}

	public int getElem (int bank, int index) {
		return pixels.getInt(index << 2) >>> 8; // rgba -> rgb, unsigned shift drops alpha
	}

	public void toInts (int[] ints) {
		int bytesPerLine = width * 4;
		var line = new byte[bytesPerLine];
		for (int i = 0, n = width * height; i < n;) {
			pixels.get(line, 0, bytesPerLine);
			for (int x = 0; x < bytesPerLine; x += 4) {
				int r = line[x] & 0xFF;
				int g = line[x + 1] & 0xFF;
				int b = line[x + 2] & 0xFF;
				ints[i++] = (r << 16) | (g << 8) | b;
			}
		}
		pixels.position(0);
	}
}
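For reference, the same no-copy trick can be sketched without libgdx at all: wrap raw RGBA bytes in an anonymous DataBuffer so a BufferedImage reads them directly, with no intermediate array. This is a standalone illustration of the technique (the class and method names are mine, not part of the code above):

```java
import java.awt.Point;
import java.awt.image.BufferedImage;
import java.awt.image.DataBuffer;
import java.awt.image.DirectColorModel;
import java.awt.image.SampleModel;
import java.awt.image.WritableRaster;
import java.nio.ByteBuffer;

/** Standalone sketch: expose raw RGBA bytes through a DataBuffer so a
 * BufferedImage reads them without copying. */
public class RawRgbaImage {
	static final DirectColorModel RGBA = new DirectColorModel(32, //
		0xff000000, 0x00ff0000, 0x0000ff00, 0x000000ff);

	static BufferedImage wrap (ByteBuffer pixels, int width, int height) {
		DataBuffer buffer = new DataBuffer(DataBuffer.TYPE_INT, width * height) {
			public int getElem (int bank, int index) {
				return pixels.getInt(index << 2); // 4 bytes per pixel
			}

			public void setElem (int bank, int index, int value) {
				pixels.putInt(index << 2, value);
			}
		};
		SampleModel sampleModel = RGBA.createCompatibleSampleModel(width, height);
		// Anonymous subclass because the WritableRaster constructor is protected.
		WritableRaster raster = new WritableRaster(sampleModel, buffer, new Point()) {};
		return new BufferedImage(RGBA, raster, false, null);
	}

	public static void main (String[] args) {
		ByteBuffer pixels = ByteBuffer.allocate(4); // one pixel
		pixels.putInt(0, 0xff0000ff); // RGBA bytes: R=ff G=00 B=00 A=ff
		BufferedImage image = wrap(pixels, 1, 1);
		// getRGB returns ARGB, so the red pixel comes back as 0xffff0000.
		System.out.println(Integer.toHexString(image.getRGB(0, 0)));
	}
}
```

The anonymous `{}` subclasses are just a way to reach the protected constructors; the key point is that the DataBuffer reads straight out of the ByteBuffer.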
I think that covers all the app classes. The rest are imported from ImageIO (java.awt.image) and libgdx.
The GIF format is simple, if very old. However, making a GIF well can be complex because of temporal quantization: reducing to 256 colors across multiple frames without the colors flickering from frame to frame. I suggest using gifski if high quality is important. There is a Java wrapper that we created, or you can run it from the CLI. Gifski makes high quality GIFs, but they are large; if you want smaller GIFs, the quality will be poor. With the Java wrapper you don't need the ImageIO garbage at all.
Nate, thank you for your help, and sorry to disturb you again. I ran into a problem while using Gifski. I built the project in IDEA (using the Build menu) and it reported success. However, when I ran the test it said gifski-java64.dll was missing; I posted the detailed error information on Gifski's issues. There I discovered that you are the author of that project, so I think you can help me.
Your error is:
Unable to read file for extraction: gifski-java64.dll
Gifski has a static initializer and SharedLibraryLoader loads the DLL:
https://github.com/badlogic/gifski-java/blob/master/src/main/java/com/badlogicgames/gifski/Gifski.java#L9
SharedLibraryLoader readFile reads the file from a JAR file named gifski-java or from the classpath:
https://github.com/badlogic/gifski-java/blob/master/src/main/java/com/badlogicgames/gifski/SharedLibraryLoader.java#L117
Since the static initializer does new SharedLibraryLoader().load("gifski-java");, the JAR file it looks for by default is gifski-java. Maybe that should be gifski-java.jar? Anyway, you can have it look on the classpath by calling new SharedLibraryLoader().load(null); before using other Gifski classes. That probably fixes it for you.
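For context, what the loader does boils down to: find the native library as a classpath resource, copy it to a temp file, and System.load() it. Here's a minimal standalone sketch of that idea (using a class file as a stand-in resource, since this example doesn't ship a real DLL; the names are mine, not the actual SharedLibraryLoader internals):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

/** Sketch of extracting a native library from the classpath to a temp file.
 * A real loader would follow this with System.load(file.toString()). */
public class NativeExtract {
	static Path extract (InputStream in, String name) {
		try (in) {
			Path file = Files.createTempFile("native-", name);
			file.toFile().deleteOnExit();
			Files.copy(in, file, StandardCopyOption.REPLACE_EXISTING);
			return file;
		} catch (IOException ex) {
			throw new UncheckedIOException(ex);
		}
	}

	public static void main (String[] args) {
		// Stand-in for gifski-java64.dll: any resource reachable on the classpath.
		InputStream in = String.class.getResourceAsStream("String.class");
		Path file = extract(in, "stand-in.bin");
		System.out.println(file.toFile().length() > 0);
	}
}
```

The "Unable to read file for extraction" error means the resource lookup step failed, before any extraction happened, which is why pointing the loader at the classpath fixes it.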
Sorry for taking so long to reply.
Thanks again for your help! (Though I didn't understand the gifski usage you described at the end.) But none of that matters now, because I found a method called animData that makes it possible to blend two animations, so exporting a GIF alone would not give the best result.
So I re-examined my requirements and came up with this scenario: I create a window in PyQt that refreshes every 0.05 seconds with images rendered in real time by libGDX. Then I bind the corresponding PyQt events to call the animation-switching functions in the libGDX program.
Why do I use a GUI library (Qt) instead of a game engine to play the Spine animation? Because I found that PyQt can achieve a "hollowout" transparent window when playing an animation with transparent parts. I want that effect, and I did not find a way to achieve it with libGDX.
As for why the Qt I use is PyQt: because I don't really know Java. That's why I only ran PngExportTest before and never got gifski working. The rest of my program is already implemented in PyQt, but to introduce libGDX I would need to rewrite its core.
So now I want to know: how can I pass image data between PyQt and libGDX without saving pictures to disk (I think reading and writing files takes time)? That is, as each frame of the animation is rendered in real time, each image would be passed out as a PNG IO stream.
Instead of PixmapIO.writePNG, you can get the RGBA pixels from pixmap.getPixels(), which returns a ByteBuffer. The pixmap is RGBA8888, so each 4 bytes stores RGBA for one pixel. Pixels are stored in rows that are each the width of the image * 4 bytes.
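To make the layout concrete, here is a tiny standalone sketch (plain Java, no libgdx) of addressing pixel (x, y) in an RGBA8888 buffer:

```java
import java.nio.ByteBuffer;

/** Sketch of addressing RGBA8888 pixels in a ByteBuffer like the one from
 * pixmap.getPixels(): 4 bytes (R, G, B, A) per pixel, width * 4 bytes per row. */
public class Rgba8888 {
	/** Returns the pixel at (x, y) packed as 0xRRGGBBAA. */
	static int pixel (ByteBuffer pixels, int width, int x, int y) {
		int offset = (y * width + x) * 4; // row start + column, in bytes
		int r = pixels.get(offset) & 0xff;
		int g = pixels.get(offset + 1) & 0xff;
		int b = pixels.get(offset + 2) & 0xff;
		int a = pixels.get(offset + 3) & 0xff;
		return (r << 24) | (g << 16) | (b << 8) | a;
	}

	public static void main (String[] args) {
		// A 2x2 image; set pixel (1, 0) to opaque green.
		ByteBuffer pixels = ByteBuffer.allocate(2 * 2 * 4);
		pixels.put(4, (byte)0); // R
		pixels.put(5, (byte)255); // G
		pixels.put(6, (byte)0); // B
		pixels.put(7, (byte)255); // A
		System.out.println(Integer.toHexString(pixel(pixels, 2, 1, 0)));
	}
}
```

Sending the raw buffer to Python is then just a matter of writing width * height * 4 bytes per frame to the pipe.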
You can send the bytes without encoding them as PNG. That is probably best, but if you must send PNG then you could subclass FileHandleStream to implement write and pass it to PixmapIO.writePNG. That way the PNG is written to the stream you return from write.
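For the PNG route, the in-memory idea can be illustrated without libgdx at all: encode a BufferedImage to PNG bytes in a ByteArrayOutputStream and send those bytes over whatever pipe you use. This sketch uses javax.imageio rather than PixmapIO/FileHandleStream, purely for illustration:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import javax.imageio.ImageIO;

/** Encode a frame to PNG bytes in memory, ready to write to a socket or a
 * subprocess pipe instead of a file. */
public class PngBytes {
	static byte[] encode (BufferedImage image) {
		ByteArrayOutputStream out = new ByteArrayOutputStream();
		try {
			ImageIO.write(image, "png", out);
		} catch (IOException ex) {
			throw new UncheckedIOException(ex);
		}
		return out.toByteArray();
	}

	public static void main (String[] args) {
		BufferedImage image = new BufferedImage(2, 2, BufferedImage.TYPE_INT_ARGB);
		image.setRGB(0, 0, 0xffff0000); // one opaque red pixel
		byte[] png = encode(image);
		// Every PNG starts with the 8-byte signature 89 50 4E 47 0D 0A 1A 0A.
		System.out.println((png[0] & 0xff) == 0x89 && png[1] == 'P');
	}
}
```

On the Python side, the receiver can either decode the PNG or, if you skip encoding entirely, just read fixed-size raw RGBA frames from stdout.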
Using both Qt and libgdx seems quite heavyweight and a pain to synchronize and pass data via IPC. What is a "hollowout transparent window"? The LWJGL3 backend for libgdx has a setTransparentFramebuffer method:
https://github.com/libgdx/libgdx/blob/master/backends/gdx-backend-lwjgl3/src/com/badlogic/gdx/backends/lwjgl3/Lwjgl3ApplicationConfiguration.java#L171-L175
Some discussion here:
libgdx/libgdx#6852
These two images show the actual window size in the two cases, implemented with libGDX and PyQt respectively; you can see that both windows extend beyond the actual image. But there is a difference: in PyQt there is
self.setWindowFlags(Qt.FramelessWindowHint | Qt.WindowStaysOnTopHint | Qt.SubWindow)
which lets me hollow out the areas that are completely transparent. That is, I want the marked parts to show the window below, and clicking a transparent area of the actual window should not count as a click on this window but pass through to the window underneath (or is the bottom window the parent window?).
Now I am wondering whether I can specify a Spine animation and a time, and extract the pictures at that time. I will use Python's subprocess.Popen to receive them, but I don't know whether that can meet the efficiency requirements.
Transparent windows are pretty low level, as is making clicks go through to the windows below. You may need to use OS specific APIs. It should be possible with libgdx alone, but will take some doing.
I've never done it, so I can't be of much help. Here's ChatGPT4's advice:
https://chat.openai.com/share/2d936ffd-7af4-43aa-899f-770c80cf7d54
You probably would use LWJGL3 for the backend with libgdx. LWJGL3 uses GLFW, so you can look at the backend and at GLFW for how the window is created. You can get the window handle from LWJGL3 or GLFW to make changes to the window they created.
You could also drop down to creating your own window that uses OpenGL and render with spine-cpp, you'd just lose all the nice things GLFW, LWJGL3, and libgdx are doing for you.
What you are wanting to do is pretty tricky, so will require a lot of legwork and experimentation.
So this seems to go back to my original discussion: using cpp rendering...
I know the runtimes directly support game engines rather than GUI toolkits, and I only just managed to implement this in Qt. I intend to use Qt plus libgdx to achieve this effect.
The question is: can I specify a Spine animation and a time, and extract the pictures for that time?
As long as the efficiency is passable, that's good enough for me.