Last edit: Peter Favrholdt on August 26, 2006 21:48

This is the way to write your ISR to prevent it from being preempted:


    //Your actual ISR code goes here....

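The code that belongs here is incomplete on this page; below is a minimal sketch of the idea, using the hard_save_flags_and_cli()/hard_restore_flags() pair that is mentioned further down this page. This is kernel-space RTAI code, so it is illustrative only; the handler name is hypothetical:

```c
#include <rtai.h>

/* Hypothetical realtime interrupt handler. The hard cli/sti pair keeps
   the CPU interrupt flag cleared for the protected region, so no other
   interrupt (in any Adeos domain) can preempt it. */
static void my_isr(void)
{
    unsigned long flags;

    hard_save_flags_and_cli(flags);   /* disable interrupts at CPU level */

    /* Your actual ISR code goes here.... */

    hard_restore_flags(flags);        /* restore the previous irq state  */
}
```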

Sharing IRQ between RTAI and Linux

static struct pci_dev* plx;

/* Linux-level handler, runs after the realtime handler has pended
   the irq to Linux */
static void PLX_linuxPostIrqHandler( int irq, void *dev_id,
                                     struct pt_regs *regs )
{
    /* Non-realtime part of the irq handling goes here */
}

static void PLX_irqHandler( void )
{
    int myIrq = 0;    /* set this from the device's status register */

    /* Lock scheduling */
    rt_sched_lock();

    // Do the realtime irq handling if required

    /* The irq is not for me, give it to linux */
    if (!myIrq)
        rt_pend_linux_irq( plx->irq );

    /* Unlock scheduling */
    rt_sched_unlock();

    /* RTAI masks the irq line on entry, so re-enable it */
    rt_enable_irq( plx->irq );
}

In the module init function:

    /* Register irq handlers; the last argument of rt_request_linux_irq()
       is the dev_id used to identify this handler on the shared line */
    rt_request_global_irq( plx->irq, PLX_irqHandler );
    rt_request_linux_irq( plx->irq, PLX_linuxPostIrqHandler,
                            "LINUX_POST_IRQHANDLER", PLX_linuxPostIrqHandler );
    rt_enable_irq( plx->irq );

In the module cleanup function:

    rt_disable_irq( plx->irq );
    rt_free_global_irq( plx->irq );
    rt_free_linux_irq( plx->irq, PLX_linuxPostIrqHandler );


From the mailinglist:
Hi Stephen

First, to give an update on my problem: by using the latency_calibration
program supplied with RTAI, I learned that one of the laptops had a worst
case (maximum) latency of 8 us with no load on the network card and 15 us
when ping flooding the network, so running with an interrupt frequency of
100 kHz would mean that some interrupts are lost. Thus the simple solution
for me was to lower my interrupt frequency to 50 kHz. Now I experience no
losses.

And now to the PIC reprogramming. After I reprogrammed it, I most definitely
saw an improvement in the number of lost interrupts. A standard PC consists
of two PICs, each with 8 interrupt lines (this is for historical reasons:
the old IBM computers only had one). All interrupts on PIC2 are cascaded
into interrupt line 2 on PIC1. For priorities this means that the timer
(IRQ0) has priority one, the keyboard (IRQ1) priority 2, and then all IRQs
on PIC2, i.e., IRQ8-IRQ15, have priorities 3-10. The bottom line is that our
parallel port on IRQ7 has the lowest priority. I've depicted this here:

PIC1 can, though, be reprogrammed to give the parallel port the highest
priority. I did this by consulting the PIC datasheet (a copy is found here:
http://ftp.penguinppc.org/users/hollis/8259A_PIC_Datasheet.pdf). If you look
at page 16, pay attention to the Specific Rotation command; this enables you
to change the priorities the way you'd like by setting a new bottom
(lowest) priority, e.g. something like this:

You can find the address of PIC1 by doing: cat /proc/ioports

Hope this helped you.

Also, while writing in this thread, I'd like to thank Paolo and Jan for
their help.

Best Regards
Jeppe Vesterbaek

From the mailinglist:
> As far as I've read through the RTAI API documentation, I found no other
> solution for implementing uninterruptible regions of code than using
> rt_disable_irq() for each irq needed. What are the differences with
> rt_shutdown_irq()? Does that function shut down all irqs? It seems not to
> me. In the same way, getting out of uninterruptible regions of code will
> be a matter of several rt_enable_irq() calls... Is that the way to go?

Calling rt_enable/disable_irq() controls a single interrupt source at a
time, enabling/disabling it at the PIC level (and Adeos domain level).
For implementing an interrupt-free section, this is likely not what you
want, since this is costly on x86, and should not be used unless you
really want to shut/reactivate specific IRQ lines. What you need is
likely hard_save_flags_and_cli()/hard_restore_flags() in RTAI parlance,
which control the interrupt mask at CPU level, hence globally for all
IRQs. Have a look at some core RTAI modules, like the scheduler(s) to
see how this works.
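As a sketch, an uninterruptible region in task or module context would then look like this (kernel-space RTAI code; the variable and function names are illustrative):

```c
#include <rtai.h>

static int shared_counter;   /* data also touched by an ISR */

void update_shared_data(void)
{
    unsigned long flags;

    /* One pair protects the whole region at CPU level, no matter how
       many IRQ lines exist -- unlike per-line rt_disable_irq() calls. */
    hard_save_flags_and_cli(flags);
    shared_counter++;                 /* the uninterruptible region */
    hard_restore_flags(flags);
}
```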

rt_shutdown_irq() is usually the same as rt_disable_irq(); it is only
here for symmetry with the Linux/x86 IRQ interface, but is not currently
used by RTAI. You should not rely on it.

Q: RTAI should support priority interrupts. Since a keyboard interrupt has a higher priority than a mouse interrupt, RTAI should preempt the mouse interrupt handler when a keyboard interrupt is received. We have tried to prove this by making a very slow mouse interrupt handler (print message #1, sleep 10 seconds, print message #2). The keyboard interrupt handler (print message) should preempt this mouse interrupt handler. However this isn't happening with our code, the keyboard interrupts are processed after the mouse interrupts. Could anybody explain what we are doing (or thinking) wrong?

A: What's wrong from an RTOS standpoint is to put lengthy processing into ISR contexts. The usual RTOS approach is to delegate such work to tasks synchronized by ISRs; the latter only have to perform the basic hw management and short internal housekeeping chores. This does not mean that there is no interest in having nestable or even prioritized ISRs for some very specific purposes, but if your whole house is built on such a pillar, you are just asking for trouble. It would also be better to re-enable interrupts in your interrupt code, since RTAI masks them by default when entering it.
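The structure described above -- a short ISR that only acknowledges the hardware and signals a worker task -- can be sketched like this (kernel-space RTAI code; the names and the IRQ number are hypothetical):

```c
#include <rtai.h>
#include <rtai_sched.h>
#include <rtai_sem.h>

#define MY_IRQ 12   /* hypothetical: e.g. the PS/2 mouse irq */

static RT_TASK worker;
static SEM     irq_sem;

/* ISR: only acknowledge the hardware and wake the worker task. */
static void my_isr(void)
{
    /* ...basic hw management: read/ack the device registers... */
    rt_sem_signal(&irq_sem);   /* wake the worker               */
    rt_enable_irq(MY_IRQ);     /* RTAI masked the line on entry */
}

/* Worker task: does the lengthy processing outside interrupt
   context, where it can be preempted by higher priority tasks
   and by other interrupts. */
static void worker_fun(long arg)
{
    while (1) {
        rt_sem_wait(&irq_sem);
        /* ...lengthy processing, e.g. the 10 s of work from
           the question, goes here instead of in the ISR... */
    }
}
```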

Q: RTAI supports priority inheritance with the use of resource semaphores.

We tried to prove this by showing that a task holding a semaphore (which has a lower priority than a task waiting for that semaphore) gets a higher, inherited priority. We thought we could realise this by making multiple tasks with different priorities: a low priority task gets the semaphore first. This task will sleep for a while so other (higher priority) tasks get a chance to wait for the semaphore, and meanwhile it prints its inherited priority, using rt_get_inher_prio.

A: rt_get_inher_prio doesn't give the inherited priority but the nominal priority, like rt_get_prio does. If you want the value of the inherited priority, you have to read it from the task structure: my_task.priority.
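In code this boils down to the following (a sketch; the field name follows the RTAI task structure and is worth checking against your version's rtai_sched.h):

```c
#include <rtai_sched.h>

RT_TASK my_task;

int nominal   = rt_get_prio(&my_task);       /* base priority             */
int same      = rt_get_inher_prio(&my_task); /* also returns the base one */
int inherited = my_task.priority;            /* the actually inherited one */
```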
