Sunday, December 28, 2008

My GPU supports SM3.0

I just realized that my laptop's GPU has been SM3.0-capable all along. I learnt about the programmable graphics pipeline only this year, and I am drooling at the thought of running vertex and pixel shaders (I am going to call fragment shaders pixel shaders) of my own. OK, it is not a top-of-the-line GPU, but hey, better than none.

When I first learnt graphics programming, it was using the lowly Turbo C++'s graphics capabilities. Then I moved on to OpenGL 1.1. I wrote a demo using display lists and such, and at the time it seemed more than enough for anything I might need.

Then I discovered the joys of the programmable graphics pipeline. It appeared to combine Turbo C++'s low-level flexibility with the inherent advantages of SGI's graphics pipeline.

Now, my GeForce 6150 (I never said it was top of the line) is ready to serve me in my pursuit of crazy shaders. Of course, I'll start small, and then I plan to move the control into Python and leave the heavy lifting to C.

This also appears to be a great place to move into multithreading. Raw pthreads from C seems unwise; I would prefer to orchestrate the entire thing from Python. At least the loading/storing of textures should be simple enough, and it is exactly the kind of place where multithreading can help hide latencies.

I have also found the DevIL library for image loading, and it seems very nice, though I don't know if it is thread-safe. I hate it when I open the docs for some library, only to find that it is not thread-safe. Thread-safe libraries are essential for concurrency.
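As a first experiment, loading a texture with DevIL should look roughly like the sketch below. This is a minimal sketch from the DevIL docs as I remember them, with "texture.png" as a placeholder path; whether it is safe to call from a worker thread is exactly the open question above.

#include <IL/il.h>
#include <cstdio>

// Minimal DevIL load test; link with the DevIL library (typically: g++ load.cpp -lIL).
int main() {
    ilInit();                    // initialize DevIL once, from one thread

    ILuint img;
    ilGenImages(1, &img);        // create an image name
    ilBindImage(img);            // make it the current image

    // "texture.png" is a placeholder path for this sketch
    if (!ilLoadImage("texture.png")) {
        std::fprintf(stderr, "DevIL load failed: 0x%x\n", ilGetError());
        return 1;
    }

    // Raw pixels, ready to hand over to glTexImage2D on the GL thread
    ILubyte* pixels = ilGetData();
    std::printf("loaded %dx%d image at %p\n",
                ilGetInteger(IL_IMAGE_WIDTH),
                ilGetInteger(IL_IMAGE_HEIGHT),
                static_cast<void*>(pixels));

    ilDeleteImages(1, &img);     // free the image when done
    return 0;
}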

Thursday, December 25, 2008

2008: A Review

Today as I sit down to write this, one thought runs through my mind: that this is the year of change. Sure, stuff changes all the time, and

Change is the only constant.

Yet I cannot shake off the feeling that this year, somehow, more things have changed than they usually do. Moreover, I see changes in trends, not just in isolated incidents. To summarize (in no particular order):

1. GPGPU went mainstream. It got lots of attention from the tech press, and I too read some stuff about the programmable graphics pipeline. The OpenCL spec was released, which I am sure is just the catalyst needed for GPUs to come into their own.

2. I wrote a program in two different languages, namely Python and C++. I suffered a lot on the way, but I had an optimized, vectorized (in parts) and cache-aware BFS ready when I was done. And yes, I learnt about valgrind along the way.

3. India made history in many ways. We landed on the moon, had our best Olympics ever, had our break-out moment and won emphatically against Australia. Today, I see India, Australia and South Africa forming a triad that will compete for the No. 1 slot into the next year.

4. Obama became the US president. It just had to make the list. And you know what it means.

5. I got an opportunity to go abroad (a first in my family), so it's a big deal for me.

6. The war on terror started going right for the US and upside down for India. The Yankees realized that the ISI has been milking them all along and is actually busy screwing them royally in Afghanistan. Terror struck India like never before, and that's without even counting 26/11.

7. The public coming out on the streets to protest was a welcome change. However, the public anger needs to be better directed if we are to achieve our long-term goals.

Monday, December 22, 2008

A quiet December

It has been a quiet December this year so far. I mean, nothing on the blog, and I have been too lazy to bother doing much else in the holidays. The internet connection here has been acting up lately, in part because of the cable outage. Lots of stuff has been flowing through my mind these past few days. Will summarize it soon.

Though I need to decide soon on the electives I will be taking next semester.

Wednesday, November 26, 2008

Going home

Exams got over yesterday, though a presentation remains. Leaving for home the day after tomorrow.

Sunday, November 23, 2008

Super PCs now available off the shelf

It seems that the trend of building supercomputers out of multiple GPUs in one PC is catching on. I really like this idea, and now nVidia is offering pre-built and qualified systems along those lines. But yes, they come with the Tesla series of cards instead of the cheaper GeForce cards.

Come on AMD, get your OpenCL implementation working so that we can enjoy much faster super-PCs.

Saturday, November 22, 2008

Good texture compression review

I came across this very nice review of texture compression. The author covers the need for texture compression, the typical techniques used, and their pros and cons. There are a few spelling mistakes here and there, and he builds his case for texture compression around AGP, an extinct bus. All in all, a nice review to aid your understanding of texture compression.

Tuesday, November 18, 2008

AMD's FASTRA now possible

I have written before, in some detail, on how better tools are needed on the AMD side to do GPGPU stuff. I had particularly pointed out the need for their drivers to support four 4870x2 cards in a PC, as that makes for a nice 9.6 TFLOPS toaster.

This announcement certainly means that they are stepping up their efforts, though a true C compiler remains in the foggy future. Further, this thread says that they expect the drivers to recognize them (the four cards, that is) all right.

I saw a few people eager to make such a toaster. Well, with that announcement, good luck guys.

And this suggests that a C-to-IL compiler may not be so far into the future either. Well, good luck guys, give us a C compiler quickly.

Friday, November 14, 2008

India on the moon

Now there's a tricolour on the moon. Great job, ISRO.

Tuesday, November 11, 2008

Multithreading in Python, revisited

I wrote about multithreading support in Python earlier.

Turns out that while Python has full support for threads, the CPython interpreter will execute Python bytecode in only one thread at a time, enforcing de facto serialization of your parallel code.

Come to think of it, it is the worst possible multithreading solution: you get all the problems of parallel programming and no speed benefit at all. It could still be useful for prototyping, but it is definitely a bad idea for production code.

Sunday, November 9, 2008

Multithreading and Python

Came across this fantastic review of the various Python VMs out there. Of these, Stackless Python particularly caught my eye.

Native concurrency support. No GIL issues. Point-to-point message passing for free.

Not bad. Not bad at all.

I have written before about my desire to write multithreaded code. Due to other demands on my time, that project is still stuck in limbo. But I realized that it will take a lot of work (aka lots of infrastructure code) to get it done. So, naturally, I turned to Python, because

The best code you write is the code that you don't write at all.

Turns out that while Python has full support for threads, the CPython interpreter will execute Python bytecode in only one thread at a time, enforcing de facto serialization of your parallel code.

Not good.

But then you can call multiple extensions from the main thread and have them release the GIL. And after this, extending the interpreter doesn't appear so difficult. A quick Google search later I landed here, which shows just how easy it is for your extensions to release the GIL before launching into C code and re-acquire it before returning to Python land.
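The pattern boils down to the standard Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS macro pair from the Python C API. Below is a minimal sketch of what such an extension function might look like; the module and function names (cruncher, crunch_numbers) are hypothetical stand-ins for the real heavy lifting, while the macros are the real CPython API.

#include <Python.h>

// Hypothetical CPU-heavy routine that touches no Python objects.
static double crunch_numbers(long n) {
    double acc = 0.0;
    for (long i = 1; i <= n; ++i)
        acc += 1.0 / static_cast<double>(i);
    return acc;
}

static PyObject* py_crunch(PyObject*, PyObject* args) {
    long n;
    if (!PyArg_ParseTuple(args, "l", &n))
        return NULL;

    double result;
    Py_BEGIN_ALLOW_THREADS       // release the GIL: other Python threads may run
    result = crunch_numbers(n);  // must not touch any Python objects in here
    Py_END_ALLOW_THREADS         // re-acquire the GIL before returning

    return PyFloat_FromDouble(result);
}

static PyMethodDef methods[] = {
    {"crunch", py_crunch, METH_VARARGS, "Sum 1/i with the GIL released."},
    {NULL, NULL, 0, NULL}
};

// Python 2.x style module init, matching the CPython of today.
PyMODINIT_FUNC initcruncher(void) {
    Py_InitModule("cruncher", methods);
}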

Will take a shot at this approach if time permits; I have a few ideas on what to do. But I am not abandoning the classical interpreter for Stackless Python just yet. I need to figure out how nicely Stackless Python plays with C extensions before I do that. And of course, there needs to be a Fedora package. There is simply no question of compiling the whole thing myself.

Saturday, November 8, 2008

India on Moon

It's done, successfully. Our baby is in lunar orbit. Congratulations, ISRO, you did a great job.

Tuesday, October 28, 2008

Finally, I achieve closure

Boy, this has been a ride beyond any of my dreams.

It was a huge pain to get it to work. I contemplated many times (and deeply) abandoning the whole effort. But some irrepressible part of me kept me working on it.

Somehow, it was resuscitated. The gods smiled on me and it worked.

But there was a twist in the tale: then it threw a mighty fit. It was definitely the lowest point in my entire saga (so far, that is). I had actually abandoned it (formally, at least). Only God knows why I invested more effort into it; I have no recollection of what pushed me into chasing it.

Somehow, that was achieved. You are a life saver, valgrind. But the overhead was still too much: 65x more than what it should have been, 6x more than the useful work itself. Simply unacceptable.

But, finally, today, on the auspicious day of Diwali, I declare it done. It goes without saying that I am very pleased with the overall results.

It's done, completely.

The only thing left is merging it into master.

As for how the overhead was eliminated, have a look here. Funny, isn't it?

Today, I can confidently say that this is the most complex bit of engineering I have ever attempted. And I have accomplished the goals I set before beginning it. I spent time and effort even on stuff like API orthogonalization. Definitely my biggest achievement yet.

Happy Diwali

Happy Deepawali, everyone!!!

Sunday, October 26, 2008

Finally, at home

At last, I reached home today. The entire journey turned out to be a longer detour than I had expected. But finally, I am home now.

Thursday, October 23, 2008

Home

Going home tomorrow. Lots of submissions pending now.

So, slog on.

It will take some doing to get this thing done. Maybe even a night out. And not to mention that I have done no packing so far.

Interesting 16 hours ahead.

Tuesday, October 14, 2008

Framewave

Nice thing, this Framewave library. I have felt for a long time now that it suits my needs just fine: a vectorized, multithreaded, open-source library. Now there is a package in Ubuntu for it. But not in Fedora yet

:(

I may need to use it soon. I feel like switching over to Intrepid when it is released. They have a mayavi2 package as well. Really feeling like switching over now.

Tuesday, October 7, 2008

Good luck

I hope they succeed at this. Really don't want Intel or nVidia as a monopoly...

Tuesday, September 30, 2008

What an idea!!!

Now that's called an idea. Call him crazy, but hey, this guy wants a shot at it. And who knows, he might succeed.

Monday, September 29, 2008

WTF?

It's not working. After working fine, giving the right results, even allowing itself to be benchmarked and being reported as correct, it has kicked up a huge fit. There seems to be a memory bug somewhere. Chasing it led me into C++'s STL and glibc. That bug-hunt story is as surprising as it is rewarding (from a learning point of view, not a productivity one). Then I came across valgrind.

Thank god it exists. It's literally manna from heaven.

There seems to be a bug in the lookup table for the map between position and bit offset. I have tried removing the alignment requirements and tried both the memset and the manual-set versions, but it stubbornly says that there is a small error somewhere here. All errors from my shared binary point to its involvement.
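For context, the mapping itself is the usual bitset arithmetic; here is a minimal sketch of that kind of helper, with illustrative names rather than my actual code.

#include <stdint.h>
#include <cstddef>

// Illustrative bitset helpers: map a position to (word, bit) using 64-bit words.
// word = pos / 64, offset = pos % 64.
struct BitTable {
    uint64_t* words;   // assumed sized for the largest position

    bool test(std::size_t pos) const {
        std::size_t word = pos >> 6;    // pos / 64
        unsigned offset  = pos & 63u;   // pos % 64
        return (words[word] >> offset) & 1u;
    }

    void set(std::size_t pos) {
        words[pos >> 6] |= (uint64_t(1) << (pos & 63u));
    }
};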

Need to sleep on it somewhat. It doesn't appear to be a big bug at this time, but it will be hard to find. Hopefully it's the only one.

Friday, September 26, 2008

Hurray

It's working. That's a huge, huge satisfaction. Considering that it was only yesterday that I was down in the dumps, the recovery, some 20 hours later, seems miraculous.

However, as always, the devil lies in the details. A blind run on production-sized samples led to a 3x slowdown.

Yup. That's correct. All this effort, all this pain, all the sweating, thinking, toiling, fretting and praying, for a 3x slowdown. I didn't sign up for this.

On deeper inspection, the searches are at least 2x faster and the PRNG is 5x faster. Then what the hell is up with it?

Turns out, the overhead of the Python/C++ transition is too much. It's not much if you amortize it over large runs, but if the underlying code is not going to do much work per call, you are dead. So, some automation is in order. Further, I need to add prefetching hints to the code; I think at least some of the memory-access latency can safely be hidden behind the vector operations. And I just realized, after running it, that there are fundamental limits to the speedup that may be obtained with my cache-aware optimizations.
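For the prefetching part, the shape I have in mind is roughly the sketch below: issue a software prefetch a few records ahead, then do the vector work while that line is in flight. A minimal sketch with an illustrative stride and prefetch distance, not my actual loop.

#include <emmintrin.h>   // SSE2: __m128i, _mm_load_si128
#include <xmmintrin.h>   // _mm_prefetch, _MM_HINT_T0
#include <cstddef>

// Walk an array of 16-byte-aligned records (four ints each), prefetching a
// record a few iterations ahead so the fetch overlaps the vector work.
void process(const int* data, std::size_t n_records) {
    const std::size_t kAhead = 8;   // illustrative prefetch distance, in records
    for (std::size_t i = 0; i < n_records; ++i) {
        if (i + kAhead < n_records)
            _mm_prefetch(reinterpret_cast<const char*>(data + 4 * (i + kAhead)),
                         _MM_HINT_T0);   // start pulling the future line into cache
        __m128i v = _mm_load_si128(reinterpret_cast<const __m128i*>(data + 4 * i));
        // ... real vector work on v goes here, hiding the prefetch latency
        (void)v;
    }
}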

I am gonna go out on a limb and say this: I think we are approaching the fundamental limits of this method. But then, I have said before that I was out of optimizations for this. And of course, as always,

THERE AIN'T ANYTHING SUCH AS THE FASTEST CODE.

In short, the real bottleneck is between the keyboard and the chair, not between the motherboard and the cooler.

Then why say so? It's my gut feeling. I would love to be proven wrong; the bigger the margin, the better.

Wednesday, September 24, 2008

When is enough, enough?

SWIG is working. My new cache-aware data structure is working. The segmentation faults have been removed. It seems faster too. Good news, you may ask?

Unfortunately, no. When my code was working, I created a new branch with git and left the old one in master. Something seems to have gone wrong in the meanwhile. Today I checked out the master branch, and it was the same as my new branch.

This is bad.

Very bad.

Anyway, what I was trying to do was add vectorization. It's not working, and I have no idea why. The portions I actually vectorized are reporting correct results, but other parts also got touched as I was converting my data to 16-byte-aligned AoS form. Now I have no idea why it's not working, and meanwhile I wanted to go back to the older, working, scalar version.

And now, it's gone.

God knows how much I struggled to get vector multiply working using only SSE2 intrinsics. There is a direct instruction in Penryn-class CPUs. It turns out that SSE3 wasn't so useful after all, since it added mainly floating-point intrinsics.
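For the record, the standard SSE2 workaround (the direct instruction being SSE4.1's pmulld, which Penryn introduced) is two widening 32x32-to-64-bit multiplies with _mm_mul_epu32, then shuffling the low halves back together. A sketch of that trick, not necessarily the exact sequence that was in my code:

#include <emmintrin.h>   // SSE2 intrinsics

// Lane-by-lane multiply of four packed 32-bit ints using only SSE2.
// SSE4.1 (Penryn) has _mm_mullo_epi32 (pmulld) for this; on SSE2 we emulate it.
// The low 32 bits of the product are the same for signed and unsigned inputs.
static inline __m128i mullo_epi32_sse2(__m128i a, __m128i b) {
    // 32x32 -> 64-bit products of the even lanes (0 and 2).
    __m128i even = _mm_mul_epu32(a, b);
    // Shift the odd lanes (1 and 3) down into even positions; multiply those too.
    __m128i odd = _mm_mul_epu32(_mm_srli_si128(a, 4), _mm_srli_si128(b, 4));
    // Keep only the low 32 bits of each 64-bit product...
    __m128i even_lo = _mm_shuffle_epi32(even, _MM_SHUFFLE(0, 0, 2, 0));
    __m128i odd_lo  = _mm_shuffle_epi32(odd,  _MM_SHUFFLE(0, 0, 2, 0));
    // ...and interleave them back into lane order 0, 1, 2, 3.
    return _mm_unpacklo_epi32(even_lo, odd_lo);
}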

:(

Can I have some divine intervention please?

Wednesday, September 17, 2008

Segmentation Fault

I have some good news and some bad news. The good news is that the SWIG-driven C++/Python combination is working fine.

The bad news is that this attempt of mine also happens to be the first time I have jumped headlong into using pointers. I have used them before, but only in small amounts, where the code was well understood and in a working state, and even then they were only introduced as part of optimizations. So now all their nastiness is being exposed to me: I am getting segmentation faults at seemingly random places. An example.

I am using a file, and when I am done using it, I set the pointer to NULL. Further, as an added precaution, I was also closing the file in the destructor. So it led to an attempt to

fclose(filePtr);  // filePtr is NULL when this is called

This was causing a segmentation fault; I had no idea that you can't fclose a NULL file pointer. Now I have dropped this call altogether, but I am still getting segmentation faults in seemingly random places. I can't use gdb to debug it either (I don't know how). Segmentation faults are supposed to be the easiest ones to find: you just run the program in a debugger and it points you to the offending location. But no such luck for me. So, bottom line, code on.
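For what it's worth, passing NULL to fclose is undefined behaviour per the C standard, and glibc duly crashes on it. The defensive version of that destructor logic is a guard plus closing exactly once; a minimal sketch, not my actual class:

#include <cstdio>

// RAII-style wrapper: close the file exactly once, and never fclose a NULL pointer.
class File {
    std::FILE* fp_;
public:
    explicit File(const char* path) : fp_(std::fopen(path, "rb")) {}
    ~File() {
        if (fp_ != NULL) {       // guard: fclose(NULL) is undefined behaviour
            std::fclose(fp_);
            fp_ = NULL;
        }
    }
    std::FILE* get() const { return fp_; }
private:
    File(const File&);           // non-copyable: two owners would double-close
    File& operator=(const File&);
};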

Saturday, September 13, 2008

Updates

Exams finished yesterday. I have some good news and some disappointing news. The good news is that I got SWIG to almost work. Just two small routines are left to write, and then I should be on my way to C/C++-plus-Python nirvana. Big deal, you may very well ask.

It's a very big deal for me because I have an established track record of getting stuck in so-called one-time tasks. You just do them once in your life, a sort of foundation-stone-laying ceremony for something. They bite me especially; the rest of the folks would just do them and forget all about it. Not me, though. Anyway, the good news is that right now the stuff seems to be working fine.

The disappointing news is that the 4870x2 is not supported by the AMD SDK. WHY??? Of all stupidities, why this one? It's the fastest card in your line-up, for Christ's sake. Poor guy.

Checked out v1.2 of AMD's Stream SDK. It was horrible. Again. Brook+ now supports ints, but their FP matrix multiply routine still uses floats as counters, while their int-based matrix multiply routine uses ints as counters. Why haven't they made even this simple a change? Not much improvement in the docs either. New features such as compute shaders and local data share are exposed only in CAL but not in Brook+. It seems that AMD is focusing its efforts on building CAL as a reliable foundation for future releases (aka OpenCL and DX11 compute shaders) and has totally ditched adding new features to Brook+.

Saturday, September 6, 2008

India's Breakout Moment

Ladies and gentlemen, India's breakout moment has arrived. It's fitting that it arrives less than a month after the Beijing Olympics. It's a very happy feeling that we are sitting at the world's highest tables today. In 1998, we were abused, screamed at, called names, and sought to be punished (with economic sanctions). In 2008, we are a fully-paid-up lifetime member of the very same club. It is fitting that the country that built a technology jail for us has torn it down after 34 years.

In this hypocrite world, the only language that is understood is the language of power. As has been said very eloquently, samrath ko nahin dos gusain; in English, the powerful can do no wrong. In 1998, we were seen as a danger to world peace and stability. In 2008, nobody dared cross our path as we rewrote a 40-year-old treaty signed by ~190 countries, on our terms. The icing on the cake is that we are still out of that treaty! What's the difference between then and now? India's march ahead is seen as inevitable. And when a 1.2-billion-heavy elephant puts on weight and starts building up momentum, people think twice before getting in the way.

As for the non-proliferation pricks/ayatollahs/hypocrites, the NPT has been blown apart from the inside, not the outside. The biggest dangers to it lie within: China, North Korea and Iran have systematically shredded it.

After following insane policies for 40-odd years, we are now well on our way to achieving our rightful place in the world. Sure, we have an almost infinite capacity to screw it all up, but still, this gives me renewed hope that at least in my lifetime, we will be able to stand up to anybody else in the world with pride.

I hope this turns out to be our inflection point. Watch out world, India is about to gatecrash your party (on the time scale of a couple of decades, that is). In its own style.

Wednesday, September 3, 2008

Exams

Exams. They are here. Again. Gotta study. I wish there were fewer courses and more time to pursue your own interests here at IIT. But anyway, I really need to focus on my studies now. I can't keep cribbing and neglecting my studies meanwhile. Though I haven't really thrown myself into studies headlong yet. :)

Saturday, August 30, 2008

Wish 3

It's official: AMD's new SDK for GPUs is coming in two weeks. I just hope they do a better job this time. I downloaded the docs for their current release and went through them, just to get a feel for what their hardware/software platform is like.

It was horrible.

The docs were alpha grade. They actually have a full-blown version of their SDK meant for you to code in assembly. That's not a typo. Gawwd......, assembly, in 2008. I have had my share of writing assembly for a lifetime. We had a course in microprocessors where we had to write in assembly, hand-assemble it and then punch it in hex. I hated it and I am done doing it. I think it is going to be a while before I even contemplate writing assembly, even for the innermost loops.

Brook+ is a disaster. OK, maybe not a disaster, but I still don't feel that it is the right way to go about it. Its foundations were laid in 2004, when men were men and wrote DirectX/OpenGL shaders to multiply matrices. It was meant to let folks write portable shaders without asking them to learn a graphics API first. Brook+ looks like that, and acts like that too. Even today, Brook+ compiles to C++ before it finally compiles to machine code. I don't think it is the right tool in 2008.

My hunch is that nVidia beat them to the punch with CUDA and they were forced to respond. In a hurry they dusted off whatever they could find and pushed it out after some renovation. It doesn't support integers and bitwise ops, and they are forced to use floats as counters. What does that tell you about the maturity of their toolchain? However, their announcement of IL (aka ptx for AMD) indicates that now they have a solid base to build on. I must admit I really liked the architecture of AMD's GPUs over nVidia's, and I hope this poor soul is able to achieve his dreams. Not to mention that one can get 2.4 TFLOPS per card from AMD ;)

Bottom line, a few things are needed before it can be considered a serious competitor to CUDA.

1) Better docs. Absolutely the first thing they need to do. More detailed docs, explaining the hardware naturally, with lots of in-doc code samples. Having small bits of code explain the stuff to you right next to the theory really helps.

2) A real C compiler. No Brook-style fluff. No assembly in 2008. Expose the hardware better in the docs so that we know what kind of choices we are making in our code. What lies on chip, and what is off chip? What is cached and what is not?

3) More stable drivers. It was said that the drivers will not support 8 GPUs even if you could pack them into one PC by using 4 X2 cards. Why? This level of support does not cost them much. The FASTRA guys generated an enormous amount of PR goodwill for nVidia. This kind of good news really gets attention from those who are serious about writing high-speed stuff for your platform. AMD stands to gain a lot of (much-needed) developer attention if it can demo a 9.6 TFLOPS system and go one up on CUDA, which has been getting a lot of developer attention. [FASTRA only does 4 TFLOPS at the max :( ]

4) And yes, let users figure out whether they want a particular GPU to be used for graphics or not. Sometimes integrated graphics are enough, as in this case. The consumer bought it; he should have the right to decide whether he wants to contribute to global warming by playing games, folding@home, or running his own compute stuff on it.

Wednesday, August 20, 2008

Some Good News

Just installed CPU-Z. The results are very good news for what I am trying to do: my CPU supports SSE3 and has a cacheline size of 64 bytes to boot! Though I am disappointed that /proc/cpuinfo didn't show me that. Maybe I need to check a few things here and there to be sure of what's up with /proc/cpuinfo. I just wish I had more time to do it; I am really itching to have a go at it. The C++ skullduggery has been done. Now I just need to run SWIG (and pray to God it works out). If I can implement all the ideas I have in mind, this would really be something for me to be proud of.
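Incidentally, on Linux there are programmatic ways to double-check the cacheline size without CPU-Z: glibc exposes it through sysconf, and sysfs has it too. A minimal sketch, assuming a reasonably recent glibc:

#include <unistd.h>
#include <cstdio>

int main() {
    // glibc extension: L1 data cache line size in bytes (may return 0 if unknown).
    long line = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);
    if (line > 0)
        std::printf("L1 dcache line size: %ld bytes\n", line);
    else
        std::printf("sysconf could not report it; try "
                    "/sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size\n");
    return 0;
}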

Greatly looking forward to it.

Friday, August 15, 2008

Happy Birthday

Happy birthday, India

Wishing you many happy returns of the same

Wednesday, August 13, 2008

A more efficient data structure

I need to implement a faster version of BFS, so I am looking for a new representation for the graph. I am going to assume that the cacheline size is 64 bytes on my Turion 64 X2 processor. It should lead to higher locality of reference and much better packing than before. And if my assumption about the cacheline size turns out right, it's going to be a big plus.

I have a new idea in mind, but I need to figure out the nuts and bolts of it. Hopefully it should be done soon.
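For illustration only (this is not necessarily the idea in question), one obvious shape for such a representation is to pack each node's adjacency list into 64-byte, cacheline-aligned buckets, so that expanding a node touches one cacheline per bucket:

#include <stdint.h>

// One cacheline-sized bucket of adjacency data: exactly 64 bytes, aligned,
// so expanding a node during BFS costs one cacheline fetch per bucket.
struct __attribute__((aligned(64))) AdjBucket {
    uint32_t count;           // number of neighbours stored in this bucket
    uint32_t next;            // index of an overflow bucket, or ~0u if none
    uint32_t neighbours[14];  // 8 + 14 * 4 = 64 bytes total
};

// The graph is then an array of AdjBucket plus a per-node index of its
// first bucket; walking a neighbourhood touches one line per 14 edges.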

Monday, August 11, 2008

Hurray

Yaaaaaaaaaaaaaaaaaaaaaayyyyyyyyyyyyy!!!!!!!!!!!!!!!!!
Whhhhhhhhhooooooooooooooooooooooooooooo
HO-HO-HO-HO-HO-HO-HO-HO-HO-HO-HO-HO-HO-HO

Keep the party going, guys. Best of luck.

Tuesday, July 29, 2008

Terror in India

Over the past few days, terror has really gripped India. Today, bombs were uncovered like mushrooms all over the country. The Indian security establishment is undoubtedly pathetic, but hopefully they'll improve after this (though the cynic in me says that is next to impossible). On a more sinister note, this strategy of several low-intensity bombs going off in multiple cities in quick succession seems to be generating more terror while taking fewer casualties than the previous practice of detonating a few powerful bombs in one city and then lying low for a while.

Monday, July 28, 2008

In Mumbai

I am back in Mumbai and it will take a couple of days for life to normalize here. It's been raining cats and dogs all day today; looks a little better right now. The internet here is somewhat better than back home, though it is much better at off hours.

There are a couple of things that I must get done in the next few days. One of them is to make linuxdcpp run without shutting down the firewall (had to do that on openSUSE 10.2, :-( ). It runs fine on F8, so it should be no problem on F9.

Hopefully, the rains will abate soon.

Friday, July 25, 2008

Wish 2

Can't tell you how much I would love fully-featured open-source drivers from the GPU guys for their latest (even if not greatest :)) 3D cards for Linux. Right now, I am writing this on a laptop which has an integrated nVidia GPU, a GeForce 6150 Go. Decent, but not great. But hey, it's pretty good for my budget.

The only practical problem with the nVidia drivers is that whenever I shut down X, the whole screen goes garbled and starts displaying weird colours. Thanks to the Ctrl-Alt-F6/F7 trick, I am able to restore it back to normal, but it shows the lack of maturity of the drivers. The problems associated with trying to reach nVidia to resolve issues with their proprietary drivers are legendary.

I just hope that the nouveau guys can get 3D up and running ASAP. AMD/ATI has done a very good job here by providing docs so that we can write our own drivers. Right now, I am pretty sure that my next computer will have Intel or ATI graphics in it. Besides, I am secretly hoping that Intel is able to resist the temptation to go proprietary-only for Larrabee.

The real big push in the driver space is going to come because the GPGPU guys (like me) want more flexibility in the drivers. The driver is a very important piece of the software stack, and the features provided by present-day drivers are simply not enough if GPGPU is to live up to its true potential. Want an example? There are many.

1) I want the data to be directly DMA'ed to/from GPU memory instead of being streamed via system RAM.

2) I want the GPU-handling API to be able to signal a condition variable (pthreads); see the sketch after this list.

3) I want the GPU to absolutely not perform any graphics tasks and instead leave them to the integrated graphics.

4) Related to 2: anybody who is going to the trouble of porting stuff to GPUs obviously wants his CPU to be maxed out as well. With parallel solutions a dime a dozen, he would obviously want some support for his solution from the API, as in 2 above.
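To make wish 2 concrete, here is a sketch of the integration I mean. No such driver API exists today (which is the whole point), so the driver side is simulated by a plain thread; the pthreads half is the standard flag-plus-condvar pattern.

#include <pthread.h>
#include <unistd.h>
#include <cstdio>

static pthread_mutex_t mtx  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool gpu_done = false;

// What a driver completion callback should do: set a flag and signal.
static void on_gpu_complete() {
    pthread_mutex_lock(&mtx);
    gpu_done = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&mtx);
}

// Stand-in for the driver: a plain thread "finishes a kernel" after a
// second and fires the callback. A real API would do this internally.
static void* fake_gpu(void*) {
    sleep(1);
    on_gpu_complete();
    return NULL;
}

int main() {
    pthread_t gpu;
    pthread_create(&gpu, NULL, fake_gpu, NULL);

    pthread_mutex_lock(&mtx);
    while (!gpu_done)                    // standard condvar wait loop
        pthread_cond_wait(&cond, &mtx);  // the CPU is free to do real work instead
    pthread_mutex_unlock(&mtx);

    std::printf("GPU work done, CPU resumes\n");
    pthread_join(gpu, NULL);
    return 0;
}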

While I don't expect even the upcoming open-source ATI drivers to have the features I wish for, their open-source credentials ensure that I can hack them in myself if I want to.

I just hope that the RadeonHD driver team is able to deliver quickly and I am praying that nouveau team succeeds as well.

Wish 1

I was surprised myself when I realized, on coming back home, how bad the service I am getting is. It is really horrible. OK, speeds haven't improved, but the reliability was much better earlier. 256k seems awfully slow after experiencing first-world internet access in Germany. And when it is served up with at best 10 minutes of continuous access, I would throw up any time.

Our situation in wired internet is not going to improve until we start enforcing unbundling. We need competition to improve our market situation; internet access is better in Europe than in the USA because of unbundling. I am afraid that's not going to happen any time soon here.

Today, I can comfortably say that the mobile phone has become a necessity. As a country we need to take steps to make broadband internet the next necessity. Our unique status gives us an opportunity to bypass 3G networks entirely and straight away build 4G networks today. True, LTE isn't available today, but then we shouldn't wait for its standardization either. We need to start deploying WiMAX now.

Unfortunately, that's not what we are doing today.

Wednesday, July 23, 2008

Tech Wishlist

Here's my wishlist, more or less in descending order. After all, I can dream, can't I?

1. Better broadband service from MTNL. Right now, you can rest assured that you will get no more than 10 minutes of continuous access, interspersed with an hour's downtime.

2. Fully-featured open-source drivers for graphics cards from nVidia and AMD (the AMD side should come true soon).

3. Better developer tools from AMD for GPGPU work on their GPUs.

4. Cheaper FPGA dev kits with higher gate densities.

5. User-assemblable laptops, just like PCs.

6. Google opening one of its many data centres in India.

7. A completely hackable RSX on the PS3 with Linux.

8. Hobbyist-hackable SoCs like nVidia's APX 2500. User-moddable devices that use them will do too (in fact, they are preferable).

I will post details about my wishes, and the why of my wishlist, in upcoming posts.

Monday, July 21, 2008

Back Home

I am back in India and it feels great to be home, even though it's only for a short while. Time to catch up with friends. Multi-threading will have to wait a little, I guess.

Saturday, July 19, 2008

Last Post

This is my last post from Germany. Can't wait to get back home. Multithreading is also coming along well, but I realize that I will need to start small. Looking forward to it.

Friday, July 18, 2008

Pack up

Now all my work here is over, and I do have some time left to take a shot at multithreading. The good news is that yesterday I tried an (admittedly introductory) example and it worked. First things first: I need to convert it from pure C to C++.

Thursday, July 17, 2008

Time to go

Well, the time to pack my bags and go home is approaching, though I may still be able to get started on my multithreading attempt. Looking forward to it.

Multi-threaded Programming

I came across a very good tutorial on the pthreads library. It is a really simple explanation, with a few well-written and well-commented examples that show you the concepts. I now think I have enough to take a crack (my first) at multi-threaded programming.

I have a design paradigm in mind to tame the nasty things associated with parallel programming. I am pretty sure that I can solve the easy ones like WAW, RAW and WAR hazards with it. I am reasonably sure that it leads to no deadlocks or livelocks either. Resource starvation? Priority inversion? Hopefully not. Race conditions? That's a problem that will have to be solved by careful design. But I guess there's only one way to find out.

Code it.
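For reference, the introductory examples in that tutorial are of roughly this shape; a minimal create/join sketch of my own, not lifted from the tutorial:

#include <pthread.h>
#include <cstdio>

// Each worker receives its id through the void* argument.
static void* worker(void* arg) {
    long id = reinterpret_cast<long>(arg);
    std::printf("hello from thread %ld\n", id);
    return NULL;
}

int main() {
    const int kThreads = 4;
    pthread_t threads[kThreads];

    for (long i = 0; i < kThreads; ++i)
        pthread_create(&threads[i], NULL, worker, reinterpret_cast<void*>(i));

    for (int i = 0; i < kThreads; ++i)
        pthread_join(threads[i], NULL);   // wait for every worker to finish

    return 0;
}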

Wednesday, July 16, 2008

Learning Haskell

I have recently taken to this language. Seems interesting. Functional programming is a very natural style for solving certain problems. Besides, it has other useful properties, such as having no side effects. There is only one problem: there is only one way to learn a new language.

You have to write a non-trivial program in it. I, on the other hand, hate doing anything that seems remotely repetitive. Now that I think of it, I learnt C++ while aiming to write a good calculator program. I ultimately wrote it, but it was hardly fit for day-to-day use. I learnt Python after falling in love with it and then beginning to use it in my data-parsing scripts. FORTRAN, ha, I just somehow hammered it into my skull.

I am really looking forward to solving a non-trivial problem in Haskell. Sure, I'll find lots of tutorials and books for it. But I don't think I am going to get it unless I write something of moderate size in Haskell, and not by just transcribing C to Haskell.

Looking forward to that problem.

My First Post

Hi all,

This is my first post here, though not my first attempt at maintaining a regular blog. I hope I will be able to keep it up this time.