Friday, November 10, 2006

some more progress

OK. So far, I have written 3 types of memory allocators, each coming to about 400 lines of code. The first is the page allocator, the second is a kmalloc on top of that, and the third is an extremely basic memcache. The page allocator's internal state depends on the memcache, and kmalloc depends on the page allocator.

I've also spent considerable time incorporating a test architecture that builds a test executable linked against all these allocators. It accepts command-line arguments, and a python script runs random allocation tests on all three allocators using different page_size, allocation size, number of allocations and other similar parameters. It turns out this sort of unit/regression testing is very powerful: each time I make a change in any of them, I can run these tests and a bug usually shows up quite soon. Another very useful thing was the BUG() and BUG_ON() macros, which caught problems pretty much as soon as I ran the allocators, and immediately pointed out "what" was going wrong. That saved hours of wasteful bug hunting. I spent a lot of time incorporating this "test" architecture, but the time was really justified.

The only problem is that I'm finding it difficult to pick the right level at which to test and compare. If you dig deep into the internal state of each allocator, the test cases get very specific, and when you rewrite the allocator you almost need to rewrite the tests as well, which is bad. But if you just focus on what the interface functions return, something might go wrong very early and only show up much later, say after 100 allocations.
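The BUG()/BUG_ON() idea is the same as in Linux; a minimal sketch of what I mean (the exact printing and halting details here are illustrative, not my actual kernel implementation) looks like this:

```c
#include <stdio.h>
#include <stdlib.h>

/* Report exactly where an invariant broke, then stop dead, so the
 * first inconsistency gets reported instead of a mysterious crash
 * a hundred allocations later. */
#define BUG() do {                                              \
        printf("BUG at %s:%d in %s()\n",                        \
               __FILE__, __LINE__, __func__);                   \
        abort();                                                \
} while (0)

/* Sprinkled over allocator invariants, e.g. BUG_ON(used > total) */
#define BUG_ON(cond) do { if (cond) BUG(); } while (0)
```

The win is that the check fires at the moment the state goes bad, right at the line that noticed it.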

Anyways, the allocators are doing well so far. The page allocator I plan to use as the primary page-allocation scheme for the whole system. The microkernel will allocate all physical memory through it, and then grant pages to pagers, which in turn will grant them (or just map them) to userspace applications. But the microkernel must be in full control of all resources first.
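As a rough sketch of the page allocator idea (the names and the intrusive free list here are illustrative only; the real allocator tracks a lot more per-page state):

```c
#include <stddef.h>

/* One descriptor per physical page frame; free descriptors are chained
 * on an intrusive singly linked free list. */
struct page {
	struct page *next;
};

static struct page *free_list;

void page_free(struct page *p)
{
	p->next = free_list;
	free_list = p;
}

struct page *page_alloc(void)
{
	struct page *p = free_list;

	if (p)
		free_list = p->next;
	return p;		/* NULL when physical memory is exhausted */
}

void pages_init(struct page *pages, unsigned int npages)
{
	for (unsigned int i = 0; i < npages; i++)
		page_free(&pages[i]);
}
```

Whoever asks the kernel for memory gets handed one of these page frames; granting to a pager is then a matter of bookkeeping on top of this.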

The memcache - It incorporates a bitmap allocator for fixed-size structures. So far I've used it primarily for allocating page middle directories (pmd). Very efficient, useful and simple. Allocation and deallocation are O(1) (ignoring the stepping through bitmap words) thanks to the bitmap allocation scheme. There is also no fragmentation, since every structure gets fully utilised. The only downside is that one has to preallocate a chunk of memory for the memcache up front.
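The core of the bitmap scheme is simple enough to sketch in a few lines (names and layout are illustrative; the real memcache carries more bookkeeping):

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define WORD_BITS 32

struct mem_cache {
	uint32_t *bitmap;	/* one bit per object; 1 = in use */
	unsigned int nwords;	/* words in the bitmap */
	unsigned int total;	/* number of fixed-size slots */
	char *start;		/* the preallocated chunk */
	size_t struct_size;	/* size of each slot */
};

void cache_init(struct mem_cache *c, uint32_t *bitmap, unsigned int nwords,
		void *chunk, unsigned int total, size_t struct_size)
{
	c->bitmap = bitmap;
	c->nwords = nwords;
	c->total = total;
	c->start = chunk;
	c->struct_size = struct_size;
	memset(bitmap, 0, nwords * sizeof(uint32_t));
}

void *cache_alloc(struct mem_cache *c)
{
	/* Step over full words, then claim the first free bit. */
	for (unsigned int w = 0; w < c->nwords; w++) {
		if (c->bitmap[w] == 0xFFFFFFFF)
			continue;
		for (unsigned int b = 0; b < WORD_BITS; b++) {
			unsigned int idx = w * WORD_BITS + b;

			if (idx >= c->total)
				return NULL;
			if (!(c->bitmap[w] & (1u << b))) {
				c->bitmap[w] |= 1u << b;
				return c->start + idx * c->struct_size;
			}
		}
	}
	return NULL;		/* cache exhausted */
}

int cache_free(struct mem_cache *c, void *p)
{
	unsigned int idx = ((char *)p - c->start) / c->struct_size;

	if (idx >= c->total)
		return -1;	/* not from this cache */
	c->bitmap[idx / WORD_BITS] &= ~(1u << (idx % WORD_BITS));
	return 0;
}
```

Freeing is pure arithmetic on the pointer, which is why there's no per-object header and no fragmentation: every slot is exactly one structure.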

Kmalloc - I haven't made use of it yet. I originally wrote it with tcb allocations in mind, but I may just do that with the memcache.


More stuff I've done:
I've debugged the add_mapping()/remove_mapping() functions. It took a hell of a lot of time to find the cause of an ever-occurring dreaded data abort during the flushing of the caches. It was almost impossible to tell what the cause was, because it was the debugger itself that caused it, while stepping through the cache flush code. I still haven't figured out why, but I know it's the debugger, because the aborts don't occur if I don't step through. But because I am pessimistic and really want to get things right before moving on, I assumed it was a bug in my code, and tried hard to find out why it was happening.

I also gladly found out yesterday that QEMU supports the Versatile PB926 platform. I booted my code on it with no modification, and also got Insight working with QEMU. The two together are really fantastic. This essentially means I can continue development anywhere I want with just a PC. It's also great to have a v5 MMU-capable ARM CPU model at hand.
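For reference, the invocation is roughly this (the kernel image name is just a placeholder):

```shell
# Boot the image on the emulated Versatile/PB (ARM926EJ-S core), with
# QEMU's gdb stub listening on port 1234 and the CPU halted at reset.
qemu-system-arm -M versatilepb -kernel kernel.elf -nographic -s -S

# Then, from Insight (or plain gdb), attach to the stub with:
#   target remote localhost:1234
```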

That's it for now. Future plans: to get tcbs sorted, load a userspace task, and switch between the microkernel and that task. This thing is getting more and more fun, especially while listening to baroque music from www.bach-radio.com. Bach and Vivaldi really rock.
