Widescreen hack and some other fixes aka AiO Patch

Discuss Drakan: Order of the Flame with fellow players and post any technical problems here where an 'unofficial' support team will try and help you. Gameplay help questions can go here too.
UCyborg
Dragon
Posts: 433
Joined: Sun Jul 07, 2013 7:24 pm
Location: Slovenia

Re: Widescreen hack and some other fixes aka AiO Patch

Post by UCyborg »

Floating point depth buffer doesn't even exist in old Direct3D. It's not part of the spec, although you can get one through dgVoodoo if you explicitly request a 32-bit buffer with mask 0xffffffff instead of the pure integer buffer; from what I was told, that's because it's what modern cards support. So I should probably undo the part of the code that prefers 32-bit buffers with mask 0xffffffff and just make it like it was before and only allow mask 0x00ffffff.

That way, the new (non-standard-compliant) code for handling floating point values will only be used when explicitly requested through Arokh.ini, and not also in the odd case where some card actually supports a full 32-bit integer buffer (in which case the results would be incorrect).

The point is, there is no clean way to tackle this. Slapping in more ugly hacks is a waste of time for no practical benefit. The perfect solution to avoid these issues already exists and it's called dgVoodoo2. It will also solve other problems that often manifest when the game tries to lock z-buffer for CPU access (lag spikes and screen corruption in this particular case). Not using it given the circumstances is just plain stupidity and ignorance.
"When a human being takes his life in depression, this is a natural death of spiritual causes. The modern barbarity of 'saving' the suicidal is based on a hair-raising misapprehension of the nature of existence." - Peter Wessel Zapffe

Mechanist
Dragon
Posts: 303
Joined: Wed Mar 07, 2018 7:27 pm
Location: Poland

Re: Widescreen hack and some other fixes aka AiO Patch

Post by Mechanist »

UCyborg wrote: So I should probably undo the part of the code that prefers 32-bit buffers with mask 0xffffffff and just make it like it was before and only allow mask 0x00ffffff.
Wait, what?
Are you implying that there is no clean way to query the device caps through DirectX? That's... nuts!

Also, I'm having a hard time understanding what's happening here in the first place.
Clearly, my graphics card supports the 32-bit floating-point Z-buffer.
Equally clearly, the code from AiO 149 didn't detect it as such... why?
Looks like something was preventing it from working as intended :?

In particular, wasn't DX supposed to be backwards-compatible? So why doesn't it fall back to an integer Z-buffer when the game requests to use only 24 bits? :?

UCyborg wrote: The perfect solution to avoid these issues already exists and it's called dgVoodoo2. (...) Not using it given the circumstances is just plain stupidity and ignorance.
Actually there is at least one good reason to NOT use it... incidentally, this is also the very reason I'm not using it either (forcing antialiasing via Radeon Settings instead).
It makes the in-game text blurry and hard to read if antialiasing is enabled.
(I presume it's because it uses FXAA, instead of implementing real AA such as what you get through the graphics driver settings.)

Not much of a problem in the menus (and thus in singleplayer) - but it's a PITA in multiplayer, where you only have a short time before the message disappears, and it's shown against the background of whatever part of the level happens to be in the upper left corner of the screen.

The default install settings for the Community Patch do include dgVoodoo2 with AA turned on - but still, some people prefer not to use it, for the reason listed above.

UCyborg wrote:The perfect solution (...) dgVoodoo2.
It would be essentially perfect, if not for the bloody antialiasing issue.

I dare say that the in-game graphics do look nicer with the "softening" effect of FXAA; it's just that the text readability is a major drag.

It doesn't help that Drakan will happily render the white message text against a light gray or near-white background of the clouds in the sky, for example.

UCyborg wrote:Not using it given the circumstances is just plain stupidity and ignorance.
That was rude and uncalled for.
Although personally I take no offense here, some of the forum and/or Discord users might not like being referred to like that.

Yes, I agree that it's best to use it on general principles; especially in a deployment context such as this.
I also do have my own opinion of the people who don't use it, but I prefer to keep it to myself.

UCyborg
Dragon
Posts: 433
Joined: Sun Jul 07, 2013 7:24 pm
Location: Slovenia

Re: Widescreen hack and some other fixes aka AiO Patch

Post by UCyborg »

dgVoodoo uses plain MSAA. The text is perfectly clear on my end with its MSAA, even with the game resolution upscaled from 1280x720 (in-game setting) to 3840x2160 and back down to my screen's native 1920x1080 (NVIDIA DSR), so I don't know what to say regarding your problems... No problems using the native resolution directly without any upscaling either, and no notable difference in text clarity with dgVoodoo's MSAA on or off.

DirectX 6.1 is from the late nineties. Back then, it seems, floating point math was avoided when possible for performance reasons. Newer depth buffer formats are only defined starting with DirectX 9 or so. Intel's and NVIDIA's drivers still behave according to the spec in that regard; only AMD's drivers are special. They have historically been problematic in general, and it doesn't seem much has changed. Then again, maybe accessing the z-buffer directly isn't a frequently used trick either, in which case it doesn't matter at all how the buffer is actually stored.

Edit: What's actually happening is this:

Game calls IDirect3D3::EnumZBufferFormats.

Direct3D calls the callback function whose pointer was passed to EnumZBufferFormats.

The callback function receives information about supported depth buffer formats (it's called once for each supported format, with the information about each format received through the pointer to a DDPIXELFORMAT struct that is passed as the first parameter).

The game can make decisions from the information received on each call; if a format matches the criteria it likes, the DDPIXELFORMAT struct content is copied for later use, provided the format in question is better than the one received previously. When it receives the format it considers ideal, it can return D3DENUMRET_CANCEL (0) to not receive information about other supported formats (if any remain). Returning D3DENUMRET_OK (1) indicates we're interested in further formats potentially supported on the machine. Most games probably only care about the buffer bitness, not the mask, which you only need to know for tricks like the one Drakan does. Otherwise, we can just select the format with the highest bitness and be done with it.

What goes wrong here is this: as said earlier, only integer formats exist in old Direct3D, and no game coded in that era would ever expect floating point values in the depth buffer. That's why there's no way to tell on the API side.

Direct3D is indeed meant to be backwards compatible, but all the heavy lifting is up to the driver. Direct3D is just a device-independent interface for telling the graphics card what to do. So Direct3D must tell the driver what should happen on the screen. Remember that when old Direct3D is involved, different codepaths in the driver will be used.

This is where AMD's driver handling of legacy Direct3D fails: even though only integer depth buffers are defined at the API level, accessing the z-buffer will reveal float values. Do note that the mask is irrelevant for distinguishing between integers and floats.

I only changed the callback recently to prefer the format with mask 0xffffffff because with dgVoodoo, you're guaranteed the floating point buffer that way, and we can avoid converting a float value down to the integer 24-bit format in the lens flare function to compare against. I'm talking about the value received in the lens flare function that doesn't come from the depth buffer, the one that's used to compare against the values in the depth buffer to determine the visibility of the lens flare in question.

This should still work without dgVoodoo on most cards, since they never report the buffer with mask 0xffffffff. It would, however, fail on an odd card that supports a proper old-Direct3D-compliant full 32-bit integer buffer. With most cards, only 24 bits of the 32-bit buffer are used for depth; the remaining 8 represent the stencil buffer, hence the mask 0x00ffffff. And as I pointed out earlier, only integer depth buffers are a thing in old Direct3D.

That's why I said I should change the callback function back to how it was, for correctness' sake (this is separate from the other related code changes). The current way doesn't help us with our problems; it just skips a small piece of code in the lens flare function when dgVoodoo is used (which, when it comes to full-blown 32-bit integer buffers, isn't old-Direct3D compliant either). It also breaks on that odd card that happens to support a proper old-Direct3D-compliant full-blown 32-bit integer buffer, without any masked-out/reserved bits.

This is where the explanation for the Force32BitDepthMask option comes in. Basically, it's an ugly hack that relies on the user to recognize that his card's drivers don't expose the depth buffer in the way expected when old Direct3D is involved. Since we won't ever pick a depth buffer with mask 0xffffffff in the callback function I talked about earlier, only 0x00ffffff (at least after reverting the change I mentioned previously), we can have a convention that says: give the lens flare function the fake mask (0xffffffff) instead of the real one we chose when we enumerated the supported depth buffer formats, as an indication that it should treat the values in the depth buffer as floats.

So we're using a fake depth mask like that purely as a convention to work around driver problems. It's impossible to use the mask itself to distinguish between ints and floats; for that, there would have to be a separate way to ask, but old Direct3D doesn't have one, and drivers are expected to only ever expose the buffer in some sort of integer format.

The ideal fix would be for AMD to fix their drivers. I had another idea of guessing the format from whether the 8 most significant bits in each value of the buffer are cleared or not. Since I don't have the exact knowledge the engineers that designed these things do, I'd rather not make assumptions.
Last edited by UCyborg on Fri Aug 31, 2018 8:57 pm, edited 1 time in total.
"When a human being takes his life in depression, this is a natural death of spiritual causes. The modern barbarity of 'saving' the suicidal is based on a hair-raising misapprehension of the nature of existence." - Peter Wessel Zapffe

Mechanist
Dragon
Posts: 303
Joined: Wed Mar 07, 2018 7:27 pm
Location: Poland

Re: Widescreen hack and some other fixes aka AiO Patch

Post by Mechanist »

Clearly, there is a difference. And it's not just me; a few other users have reported the same thing, so it's not just a case of my eyeballs being out of calibration.

Without dgVoodoo2 (and with 4xEQ antialiasing forced via Radeon Settings):
text without dgVoodoo.png

With dgVoodoo2, with antialiasing set to 4x:
text with dgVoodoo.png

By the looks of it, I'd say that's FXAA, which is little more than a very special blur function.
Even if it's actually MSAA as you claim, it's still being applied indiscriminately to the entire viewport - including the parts that should never receive any antialiasing in the first place.

I have no idea what causes this problem... all I know is that somehow, the Radeon (and nVidia) drivers can do it correctly, yet dgVoodoo fails miserably in that regard.

I can still read that text... but it's more difficult. Even though I still have fairly decent eyesight for my age... and I'm sitting very close to the monitor; not even 1m away from the 24" screen, in fact.
It's obvious that anyone with even a minor vision impairment would find it even more difficult to read, since it is already blurred by the time it's displayed, on top of the blur caused by the vision impairment.

Also, blurry text makes my eyes hurt if I keep looking at it for more than several seconds... but that's not the issue here; although I do keep ClearType disabled with extreme prejudice because of that.

And BTW... yes I did try messing with the dgVoodoo2 settings... ALL of them, in fact. No dice.
Even tried messing with the settings in the Radeon control panel, but those don't seem to do much (if anything) when using dgVoodoo (I suppose that's actually the intended behavior in that particular case).

UCyborg
Dragon
Posts: 433
Joined: Sun Jul 07, 2013 7:24 pm
Location: Slovenia

Re: Widescreen hack and some other fixes aka AiO Patch

Post by UCyborg »

I edited my previous post, hope that wall of text makes some sense. I'll respond to the rest the first chance I get.
"When a human being takes his life in depression, this is a natural death of spiritual causes. The modern barbarity of 'saving' the suicidal is based on a hair-raising misapprehension of the nature of existence." - Peter Wessel Zapffe

Mechanist
Dragon
Posts: 303
Joined: Wed Mar 07, 2018 7:27 pm
Location: Poland

Re: Widescreen hack and some other fixes aka AiO Patch

Post by Mechanist »

UCyborg wrote: Fri Aug 31, 2018 5:54 pm This is where AMD's driver handling of legacy Direct3D fails: even though only integer depth buffers are defined at the API level, accessing the z-buffer will reveal float values. Do note that the mask is irrelevant for distinguishing between integers and floats.
(...)
The ideal fix would be for AMD to fix their drivers. I had another idea of guessing the format from whether the 8 most significant bits in each value of the buffer are cleared or not. Since I don't have the exact knowledge the engineers that designed these things do, I'd rather not make assumptions.
Hmm, what about some heuristic method of determining the integer vs float format at runtime?

You said that accessing the Z-buffer directly is an uncommon thing to do; that implies there is some other way to access the Z-buffer that would normally be used instead.

If that's the case (and note that I don't know the first thing about that), would it work to write a few dummy integer values through the intended method, and then read the buffer directly and search it for those integers that have just been written?

My understanding is, if the buffer is actually integer, then these values will reside in it somewhere, where they can be read back directly. But if it's actually float, the (intended) way of writing to the Z-buffer will have converted the ints to floats, which have a totally different binary representation.

So here's what this would look like, at least according to my very limited understanding of the problem:
  • Around the time the first frame gets drawn to the screen, write some dummy 24-bit integer values to the Z-buffer (not directly, but via the API?),
  • Access the Z-buffer directly for reading,
  • Iterate reading through its contents, masking off the 8 most significant bits and comparing against each of the integers that have been written,
  • If at least as many of them have been found as have been written, then it's very likely that it's an integer buffer,
  • Otherwise (if the amount of values found is less than the amount written), it's probably safe to conclude that it's a float buffer,
  • Clean up the garbage that had been written, to prevent it from corrupting the viewport.
So, assuming that would work at least in principle, the next obvious step would be to have a setting in Arokh.ini that allows the behavior to be selected for the 32-bit float buffer: off, autodetect, or force on.

Mechanist
Dragon
Posts: 303
Joined: Wed Mar 07, 2018 7:27 pm
Location: Poland

Re: Widescreen hack and some other fixes aka AiO Patch

Post by Mechanist »

Well, oops, I just found some bugs in my previous changes to the Editor... the hard-crashing type of bugs, that is.

I've updated my previous posts with the new (fixed) code and the new Level Editor.exe - now it should work as intended, instead of crashing randomly while trying to use the new functionality.

UCyborg
Dragon
Posts: 433
Joined: Sun Jul 07, 2013 7:24 pm
Location: Slovenia

Re: Widescreen hack and some other fixes aka AiO Patch

Post by UCyborg »

I accidentally messed up the lens flare visibility code so it didn't work quite properly; flares might render when obscured by something else, or vice versa. Fixed version

Mechanist wrote: You said that accessing the Z-buffer directly is an uncommon thing to do; that implies there is some other way to access the Z-buffer that would normally be used instead.

Actually, it's called locking for CPU access, so that part of video RAM is mapped into the process address space. And yeah, it's rarely done. A bad thing to do, I think. The buffer is normally accessed and updated when issuing drawing calls, so objects aren't drawn over each other. So in a scene with a cube on the floor, if the cube is drawn before the floor, drawing the floor afterwards doesn't overdraw parts of the cube. It happens transparently. I don't know why or how Drakan's lens flares are special; many games render that and similar things without doing anything weird, and they have no visibility-related problems.

Even if you know depth buffer contains float values, you still have this problem: there's no guarantee that merely issuing the lock call will work as expected, forget actually reading anything. The current workaround with the .ini setting keeps the clutter to the minimum and may work if one insists on not using dgVoodoo, but it's not guaranteed.

I've tried dgVoodoo on the laptop with the Radeon R2, with forced MSAA; the text is still perfectly clear. The only thing that blurs the text is "Filtering" set to anything other than "App driven". When running without dgVoodoo, forcing anisotropic filtering through the drivers doesn't have any effect on the image, on either the NVIDIA or the AMD machine.

Newer depth buffer formats are only defined starting with DirectX 9 or so. Intel and NVIDIA's drivers are still behaving according to the spec in that regard, just AMD's drivers are special. They have been historically problematic in general, doesn't seem much has changed.

Hm, 6 years ago they were certainly a lot of fun, in my experience. Can't say I remember having any strange problems in more recent times (excluding that thing Drakan does). Also realized that forcing MSAA in Drakan on the Radeon R2 isn't really expensive performance-wise; the lens flares are, and dgVoodoo's fast video memory access isn't as helpful as on more powerful machines.
"When a human being takes his life in depression, this is a natural death of spiritual causes. The modern barbarity of 'saving' the suicidal is based on a hair-raising misapprehension of the nature of existence." - Peter Wessel Zapffe

User avatar
Mechanist
Dragon
Posts: 303
Joined: Wed Mar 07, 2018 7:27 pm
Location: Poland

Re: Widescreen hack and some other fixes aka AiO Patch

Post by Mechanist »

UCyborg wrote: Mon Sep 03, 2018 3:22 am I accidentally messed up the lens flare visibility code so it didn't work quite properly, flares might render when obscured by something else or vice-versa.
Ahh, so I wasn't imagining things when the lens flares were unexpectedly not appearing (or disappearing) when viewed from certain angles, even when they should have been perfectly visible :)

UCyborg wrote: Mon Sep 03, 2018 3:22 amI've tried dgVoodoo on the laptop with Radeon R2, with forced MSAA, text is still perfectly clear. The only thing that blurs the text is "Filtering" set to anything else but "App driven". When running without dgVoodoo, forcing anisotropic filtering through drivers doesn't have any effect on the image, on neither NVIDIA or AMD machine.
Huh. Well, that explains it then, since I can't remember having ever run any game which allows AA (in any way, shape or form) without also having anisotropic filtering enabled.

And yes, I also didn't notice any significant visual difference when anisotropic is enabled through Radeon Settings - but I just chalked it up to Drakan's textures being mostly 16-bit and quite low-res compared to "more modern" games, which I assumed would make any such differences less visible.

UCyborg
Dragon
Posts: 433
Joined: Sun Jul 07, 2013 7:24 pm
Location: Slovenia

Re: Widescreen hack and some other fixes aka AiO Patch

Post by UCyborg »

Yeah, these extras are at your own risk. We just happen to have a workaround that makes anti-aliasing sort of compatible. It appears the text behaves like any other texture, so it's affected badly by forced filtering. But the game's own code can exclude it when applying filtering.

One more thought about heuristics when it comes to the depth buffer. Maybe add 2 checks: one that checks the buffer bitness to confirm we were able to obtain a 32-bit one, and another that tests the 8 upper bits. Those bits normally represent the content of the stencil buffer, which isn't used by the game, but I guess it's normally present on cards that support it (it's been a thing for a long time, a gratis feature). If they're not zero, we're probably not looking at stencil data at all, but at part of the (float) depth data.

It appears asking for a pure 24-bit buffer on today's hardware still gets you a 32-bit one, probably due to the nature of how the hardware works internally (you always get the stencil buffer gratis when asking for at least a 24-bit z-buffer), and this behavior might just be there to make picky games happy when selecting the format. My guess would be that due to the inefficiency of doing things the 24-bit way, there might be no reason for this to work differently on older hardware either. If you want to navigate the buffer correctly, whether each value in the array takes 3 or 4 bytes is important. Drakan used to insist on selecting the pure 24-bit one, which meant that in most cases the lens flare code didn't actually get correct data to be able to navigate it properly.

If the stencil buffer tends to behave like described, then maybe that type of heuristic is not too bad.
"When a human being takes his life in depression, this is a natural death of spiritual causes. The modern barbarity of 'saving' the suicidal is based on a hair-raising misapprehension of the nature of existence." - Peter Wessel Zapffe

UCyborg
Dragon
Posts: 433
Joined: Sun Jul 07, 2013 7:24 pm
Location: Slovenia

Re: Widescreen hack and some other fixes aka AiO Patch

Post by UCyborg »

I can confirm there's no guarantee those bits will be clear and assumptions about the buffer presented through a pointer obtained by locking do not always apply.
"When a human being takes his life in depression, this is a natural death of spiritual causes. The modern barbarity of 'saving' the suicidal is based on a hair-raising misapprehension of the nature of existence." - Peter Wessel Zapffe

UCyborg
Dragon
Posts: 433
Joined: Sun Jul 07, 2013 7:24 pm
Location: Slovenia

Re: Widescreen hack and some other fixes aka AiO Patch

Post by UCyborg »

Uploaded it again. Now I figure Surreal must have constructed the mask from the bitness value instead of just reading it from the API, because sometimes (often?) only 0xffffff00 is ever reported instead of 0x00ffffff (why??)... so it was a workaround for long-broken behavior, not an oversight of the mask variable in the DDPIXELFORMAT struct. Happens on my Windows XP install (native, last NVIDIA XP driver) and on all VMware virtual machines regardless of the OS. Interestingly, I haven't seen the wrong mask being reported for a 16-bit z-buffer.
"When a human being takes his life in depression, this is a natural death of spiritual causes. The modern barbarity of 'saving' the suicidal is based on a hair-raising misapprehension of the nature of existence." - Peter Wessel Zapffe

Mechanist
Dragon
Posts: 303
Joined: Wed Mar 07, 2018 7:27 pm
Location: Poland

Re: Widescreen hack and some other fixes aka AiO Patch

Post by Mechanist »

Very odd. Does the reported mask actually correspond to the buffer format, or is it just a bug caused by someone not paying attention to the endianness?

UCyborg
Dragon
Posts: 433
Joined: Sun Jul 07, 2013 7:24 pm
Location: Slovenia

Re: Widescreen hack and some other fixes aka AiO Patch

Post by UCyborg »

The assumption has always been that the values you get through the pointer are little-endian. This also holds in practice; I haven't seen it any other way. The only thing that varies is the reported mask. I'm seeing 2 different formats reported on my main install, one with mask 0x00ffffff and the other with 0xffffff00. It doesn't matter which one is picked; the content is always little-endian.

I don't know what to think of any of this. Both WineD3D and dgVoodoo are behaving saner, no wrong masks reported.
"When a human being takes his life in depression, this is a natural death of spiritual causes. The modern barbarity of 'saving' the suicidal is based on a hair-raising misapprehension of the nature of existence." - Peter Wessel Zapffe

Mechanist
Dragon
Posts: 303
Joined: Wed Mar 07, 2018 7:27 pm
Location: Poland

Re: Widescreen hack and some other fixes aka AiO Patch

Post by Mechanist »

OK, the last of my changes to Level Editor.exe for the foreseeable future:
  • In the "landscaping tool", the "Brush size" slider has had its sensitivity nerfed by 16x - previously it was impractically huge, with only the first 2 "ticks" being at all useful. This involved changing the opcodes at 484E89 and 00484EA7 from SHL ESI,2 to SHR ESI,2 (no other sliders have been modified).
  • Enormously expanded the functionality of "File :arrow: Import :arrow: Heights from Bitmap". Description below.
The original "Heights from Bitmap" import functionality is... lackluster... to say the least.
The way it works is that for each layer currently selected, it brings up a dialog to open a .BMP file (must be 8-bit grayscale), and somehow maps these 8 bits of grayscale to 16 bits of the possible layer vertex heights.

Also, it has some bizarre scaling properties: you'd expect each pixel to map to a vertex, but no, instead it somehow maps each pixel to a tile, which makes no sense whatsoever.

Needless to say, the functionality it offers is largely crippled and has a very limited range of practically feasible uses.

I needed to expand on that for the map I'm working on - so I wrote a DLL (DEM.dll) that does what I want, and modified the Editor's code to have this DLL loaded on demand.
Now if no layers are selected as the height import target, it launches my DLL, which processes all of the currently visible floor layers.

The DLL has only 1 exported function that contains all the logic, and takes 2 parameters: handle to the Editor's window, and pointer to the struct that holds information about all the layers.
My DLL reads and writes directly from/to the Editor's memory; the Editor's code is only responsible for checking whether the conditions for the DLL's invocation have been met, invoking it with these 2 parameters, then unloading it again immediately after it finishes doing its thing.

First, the change to the Editor's existing code for handling the "Heights from Bitmap" OnClick event:
0043178E 7E 35 JLE SHORT Level_Ed.004317C5
00431790 55 PUSH EBP
00431791 33ED XOR EBP,EBP
00431793 8B46 14 MOV EAX,DWORD PTR DS:[ESI+14]
00431796 8B0CB8 MOV ECX,DWORD PTR DS:[EAX+EDI*4]
00431799 8B51 08 MOV EDX,DWORD PTR DS:[ECX+8]
0043179C F6C2 01 TEST DL,1 ; Check if layer is selected for editing
0043179F 74 15 JE SHORT Level_Ed.004317B6
004317A1 F6C2 02 TEST DL,2 ; Check if layer is hidden
004317A4 75 10 JNZ SHORT Level_Ed.004317B6
004317A6 FF7424 14 PUSH DWORD PTR SS:[ESP+14]
004317AA E8 C14B0000 CALL Level_Ed.00436370
004317AF 08C3 OR BL,AL
004317B1 C646 2C 01 MOV BYTE PTR DS:[ESI+2C],1
004317B5 45 INC EBP
004317B6 47 INC EDI
004317B7 3B7E 1C CMP EDI,DWORD PTR DS:[ESI+1C]
004317BA ^ 7C D7 JL SHORT Level_Ed.00431793
004317BC 85ED TEST EBP,EBP
004317BE - 0F84 57BD0C00 JE Level_Ed.004FD51B ; Jump to the new code if no layers have been processed here
004317C4 5D POP EBP
004317C5 8BCE MOV ECX,ESI
004317C7 E8 640D0000 CALL Level_Ed.00432530
004317CC 8BCE MOV ECX,ESI
004317CE E8 7D090000 CALL Level_Ed.00432150
004317D3 5F POP EDI
004317D4 8AC3 MOV AL,BL
004317D6 5E POP ESI
004317D7 5B POP EBX
004317D8 C2 0400 RETN 4

Then the actual new code that deals with the DLL invocation:
004FD51B 33FF XOR EDI,EDI ; This point can only be reached if NO layers were both selected AND visible at the same time
004FD51D 8B46 14 MOV EAX,DWORD PTR DS:[ESI+14] ; But it's also possible that ALL the layers were hidden
004FD520 8B0CB8 MOV ECX,DWORD PTR DS:[EAX+EDI*4] ; That's a pathological case, in which the importer has nothing to do anyway
004FD523 F641 08 02 TEST BYTE PTR DS:[ECX+8],2
004FD527 74 01 JE SHORT Level_Ed.004FD52A
004FD529 45 INC EBP ; EBP = always 0 at the start of this loop
004FD52A 47 INC EDI
004FD52B 3B7E 1C CMP EDI,DWORD PTR DS:[ESI+1C] ; Check ALL the layers!
004FD52E ^ 75 ED JNZ SHORT Level_Ed.004FD51D
004FD530 3BFD CMP EDI,EBP ; Won't be equal if at least 1 layer wasn't hidden
004FD532 75 05 JNZ SHORT Level_Ed.004FD539
004FD534 - E9 8B42F3FF JMP Level_Ed.004317C4 ; Bail out if all the layers were hidden
004FD539 68 44494D00 PUSH Level_Ed.004D4944 ; "DIM"
004FD53E 68 44454D00 PUSH Level_Ed.004D4544 ; "DEM"
004FD543 54 PUSH ESP
004FD544 FF15 54614A00 CALL DWORD PTR DS:[<&KERNEL32.LoadLibrar>; kernel32.LoadLibraryA
004FD54A 85C0 TEST EAX,EAX ; Did it work?
004FD54C 74 20 JE SHORT Level_Ed.004FD56E ; Bail out if it didn't
004FD54E 8BF8 MOV EDI,EAX
004FD550 58 POP EAX
004FD551 54 PUSH ESP
004FD552 57 PUSH EDI
004FD553 FF15 58614A00 CALL DWORD PTR DS:[<&KERNEL32.GetProcAdd>; kernel32.GetProcAddress
004FD559 85C0 TEST EAX,EAX ; Did it work?
004FD55B 74 12 JE SHORT Level_Ed.004FD56F ; Bail out if it didn't
004FD55D 59 POP ECX
004FD55E 56 PUSH ESI ; Pointer to the layer meta-metadata struct
004FD55F FF7424 18 PUSH DWORD PTR SS:[ESP+18] ; Handle to the calling window
004FD563 FFD0 CALL EAX ; Call the one and only export of the DEM importer DLL
004FD565 57 PUSH EDI
004FD566 FF15 5C614A00 CALL DWORD PTR DS:[<&KERNEL32.FreeLibrar>; kernel32.FreeLibrary
004FD56C EB 02 JMP SHORT Level_Ed.004FD570
004FD56E 59 POP ECX
004FD56F 59 POP ECX
004FD570 - E9 4F42F3FF JMP Level_Ed.004317C4 ; Return control back to the Editor

UCyborg: since, as I said, I won't be making any further changes to the Editor anytime soon, feel free to incorporate the modified version into your AiO patch - especially since without it, it's not possible to import and use 32-bit textures, which are now otherwise fully functional.

The only minor snag in the current version is that the form created by my .DLL is supposed to be modal: it shouldn't be possible to even bring the Editor's window into focus, but it is.
It's still largely nonfunctional (because its code execution is stuck on waiting for my DLL to close), but it's nonetheless possible to bring it into focus and click the buttons.
I haven't tracked this issue down yet, since I was primarily focused on getting the import functionality to work in the first place. Now that I'm essentially done with it, I'll try to look into it when time permits.

I've also included the current "alpha build" of my DEM.dll - it's strictly optional to include it; it's not required for the Editor to work.
The DLL only gets loaded when "Heights from Bitmap" is clicked without any layers having been selected, and even then the code is intended to fail silently if the DLL is missing or corrupt ("You click the button! Nothing happens!").
It should go without saying, but the DEM.dll needs to go in the same folder as the Editor's executable.

Things remaining on the todo list:
  • Compile a changelog for the Level Editor + readme explaining the new texture-related functionality,
  • Write a readme for the DEM importer,
  • Test the DEM importer extensively in an actual usage scenario,
  • Polish up the DEM importer (eg. add hover hints and some context help),
  • Compile an illustrated guide explaining the whole DEM import process, starting with obtaining the source data.
Attachments
DEM.zip
Level Editor.zip
