
[v2,0/2] mailbox: introduce STMicroelectronics STM32 IPCC driver

Message ID 1520852307-26659-1-git-send-email-fabien.dessenne@st.com

Message

Fabien DESSENNE March 12, 2018, 10:58 a.m. UTC
The STMicroelectronics STM32 Inter-Processor Communication Controller
(IPCC) is used for communicating data between two processors.
It provides a non-blocking signaling mechanism to post and retrieve
communication data in an atomic way.
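From the client side, using such a controller through the generic mailbox framework looks roughly like the following sketch (the channel index, the NULL payload, and the error handling are illustrative assumptions, not code from this series):

```c
#include <linux/mailbox_client.h>

/* Sketch of a hypothetical mailbox client posting a signal on an
 * IPCC channel. IPCC is a signaling device: the controller itself
 * carries no payload, so the message pointer is typically NULL and
 * the actual data travels through shared memory agreed with the
 * remote processor. */
struct mbox_client cl = {
	.dev      = dev,
	.tx_block = false,		/* non-blocking post */
};

struct mbox_chan *chan = mbox_request_channel(&cl, 0);
if (IS_ERR(chan))
	return PTR_ERR(chan);

ret = mbox_send_message(chan, NULL);	/* post the signal */
```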

Changes since v1:
- update bindings and driver according to Rob's comments:
    - change compatible property to "st,stm32mp1-ipcc"
    - change "st,proc_id" property to "st,proc-id"
    - define all interrupts as mandatory

Fabien Dessenne (2):
  dt-bindings: mailbox: add STMicroelectronics STM32 IPCC binding
  mailbox: add STMicroelectronics STM32 IPCC driver

 .../devicetree/bindings/mailbox/stm32-ipcc.txt     |  47 +++
 drivers/mailbox/Kconfig                            |   8 +
 drivers/mailbox/Makefile                           |   2 +
 drivers/mailbox/stm32-ipcc.c                       | 403 +++++++++++++++++++++
 4 files changed, 460 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/mailbox/stm32-ipcc.txt
 create mode 100644 drivers/mailbox/stm32-ipcc.c

Comments

Jassi Brar April 5, 2018, 9:38 a.m. UTC | #1
On Mon, Mar 12, 2018 at 4:28 PM, Fabien Dessenne <fabien.dessenne@st.com> wrote:
....
> +
> +       /* irq */
> +       for (i = 0; i < IPCC_IRQ_NUM; i++) {
> +               ipcc->irqs[i] = of_irq_get_byname(dev->of_node, irq_name[i]);
> +               if (ipcc->irqs[i] < 0) {
> +                       dev_err(dev, "no IRQ specified %s\n", irq_name[i]);
> +                       ret = ipcc->irqs[i];
> +                       goto err_clk;
> +               }
> +
> +               ret = devm_request_threaded_irq(dev, ipcc->irqs[i], NULL,
> +                                               irq_thread[i], IRQF_ONESHOT,
> +                                               dev_name(dev), ipcc);
>
In your interrupt handlers you don't do anything that could block.
Threads only add some delay to your message handling.
So maybe use devm_request_irq()?
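The suggested non-threaded variant would look roughly like this sketch (irq_handler[] is a hypothetical array of hard-IRQ handlers standing in for the driver's irq_thread[]):

```c
/* Sketch: request a plain, non-threaded IRQ. The handler then runs
 * in hard-IRQ context, so everything it calls (including the
 * mailbox client callbacks) must be atomic. */
ret = devm_request_irq(dev, ipcc->irqs[i], irq_handler[i],
		       0, dev_name(dev), ipcc);
if (ret) {
	dev_err(dev, "failed to request irq %d (%d)\n", i, ret);
	goto err_clk;
}
```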

.......
> +
> +static struct platform_driver stm32_ipcc_driver = {
> +       .driver = {
> +               .name = "stm32-ipcc",
> +               .owner = THIS_MODULE,
>
No need of owner here these days.
And also maybe use readl/writel, instead of _relaxed.

Cheers!
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jassi Brar April 6, 2018, 12:56 p.m. UTC | #2
On Fri, Apr 6, 2018 at 5:59 PM, Fabien DESSENNE <fabien.dessenne@st.com> wrote:
> Hi
>
>
> On 05/04/18 11:38, Jassi Brar wrote:
>> On Mon, Mar 12, 2018 at 4:28 PM, Fabien Dessenne <fabien.dessenne@st.com> wrote:
>> ....
>>> +
>>> +       /* irq */
>>> +       for (i = 0; i < IPCC_IRQ_NUM; i++) {
>>> +               ipcc->irqs[i] = of_irq_get_byname(dev->of_node, irq_name[i]);
>>> +               if (ipcc->irqs[i] < 0) {
>>> +                       dev_err(dev, "no IRQ specified %s\n", irq_name[i]);
>>> +                       ret = ipcc->irqs[i];
>>> +                       goto err_clk;
>>> +               }
>>> +
>>> +               ret = devm_request_threaded_irq(dev, ipcc->irqs[i], NULL,
>>> +                                               irq_thread[i], IRQF_ONESHOT,
>>> +                                               dev_name(dev), ipcc);
>>>
>> In your interrupt handlers you don't do anything that could block.
>> Threads only add some delay to your message handling.
>> So maybe use devm_request_irq()?
>
> The interrupt handlers call mbox_chan_received_data() /
> mbox_chan_txdone(), which in turn call the client's rx_callback() /
> tx_done() / tx_prepare(), whose behavior may be unsafe. Hence, using a
> threaded irq here seems to be a good choice.
>
rx_callback() is supposed to be atomic. So was tx_done(), but some
platforms needed preparing for the message to be sent. Your client is
not going to be used by other platforms or even over other
controllers, so if your prepare is NULL/atomic, you should assume
tx_done to be atomic and not lose performance. If time comes to fix it,
we'll move prepare() out of the atomic path.
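An atomic-safe client along these lines might be sketched as follows (all names here are hypothetical; the point is simply that neither callback sleeps):

```c
/* Hypothetical client whose callbacks are safe to run from
 * hard-IRQ context: they only do non-sleeping work and defer
 * anything heavier. */
static void my_rx_callback(struct mbox_client *cl, void *msg)
{
	/* copy data out or schedule deferred work; never sleep
	 * or take a mutex here */
}

static void my_tx_done(struct mbox_client *cl, void *msg, int r)
{
	/* e.g. complete() a completion; atomic-safe */
}

static struct mbox_client cl = {
	.rx_callback = my_rx_callback,
	.tx_done     = my_tx_done,
	.tx_prepare  = NULL,	/* nothing to prepare: atomic path OK */
};
```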


>> .......
>>> +
>>> +static struct platform_driver stm32_ipcc_driver = {
>>> +       .driver = {
>>> +               .name = "stm32-ipcc",
>>> +               .owner = THIS_MODULE,
>>>
>> No need of owner here these days.
>
> OK, I will suppress it.
>
>> And also maybe use readl/writel, instead of _relaxed.
>
> The IPCC device is exclusively used on ARM. On the ARM architecture,
> ioremapped device memory is strictly ordered and uncached.
> In that case, using _relaxed avoids an unneeded memory barrier, slightly
> improving performance.
>
It's not about portability; it's that the impact in favor of the
_relaxed() versions is negligible when all you do is program some
registers and not heavy-duty I/O. But I am OK either way. You'd gain far
more performance handling irqs in a non-threaded manner :)
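The trade-off being discussed is roughly the following (register offsets and the base pointer name are hypothetical, not the actual IPCC layout):

```c
/* readl()/writel() include memory barriers ordering the access
 * against normal memory (e.g. DMA buffers); the _relaxed variants
 * skip those barriers. For plain register programming the saving
 * is small, which is the point being made here. */
u32 status;

status = readl_relaxed(base + IPCC_STATUS);	/* no barrier */
writel_relaxed(mask, base + IPCC_CLEAR);	/* no barrier */

status = readl(base + IPCC_STATUS);		/* barrier after read */
writel(mask, base + IPCC_CLEAR);		/* barrier before write */
```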

Cheers!
Fabien DESSENNE April 6, 2018, 3:05 p.m. UTC | #3
On 06/04/18 14:56, Jassi Brar wrote:
> On Fri, Apr 6, 2018 at 5:59 PM, Fabien DESSENNE <fabien.dessenne@st.com> wrote:
>> Hi
>>
>>
>> On 05/04/18 11:38, Jassi Brar wrote:
>>> On Mon, Mar 12, 2018 at 4:28 PM, Fabien Dessenne <fabien.dessenne@st.com> wrote:
>>> ....
>>>> +
>>>> +       /* irq */
>>>> +       for (i = 0; i < IPCC_IRQ_NUM; i++) {
>>>> +               ipcc->irqs[i] = of_irq_get_byname(dev->of_node, irq_name[i]);
>>>> +               if (ipcc->irqs[i] < 0) {
>>>> +                       dev_err(dev, "no IRQ specified %s\n", irq_name[i]);
>>>> +                       ret = ipcc->irqs[i];
>>>> +                       goto err_clk;
>>>> +               }
>>>> +
>>>> +               ret = devm_request_threaded_irq(dev, ipcc->irqs[i], NULL,
>>>> +                                               irq_thread[i], IRQF_ONESHOT,
>>>> +                                               dev_name(dev), ipcc);
>>>>
>>> In your interrupt handlers you don't do anything that could block.
>>> Threads only add some delay to your message handling.
>>> So maybe use devm_request_irq()?
>> The interrupt handlers call mbox_chan_received_data() /
>> mbox_chan_txdone(), which in turn call the client's rx_callback() /
>> tx_done() / tx_prepare(), whose behavior may be unsafe. Hence, using a
>> threaded irq here seems to be a good choice.
>>
> rx_callback() is supposed to be atomic.

I am worried about this atomic requirement (and honestly I had not 
noticed that the callbacks were expected to be atomic).

In my case, remoteproc->virtio->rpmsg is the mailbox client defining the 
rx_callback.
If I follow your suggestion, I would have to make this rx_callback atomic 
in remoteproc (or in virtio or rpmsg), and this does not seem to be so 
simple (add a worker somewhere in the middle?). Bjorn, feel free to 
comment on this part.

An alternative implementation consists of using a threaded IRQ for the 
mailbox interrupt.
This option is not only simple, but also splits the top and bottom 
halves at the irq level, which is IMHO a generally good practice.

I can see that some mailbox clients implement callbacks that are NOT 
atomic, and I suspect this is the reason why some mailbox drivers use 
threaded_irq (the rockchip mailbox splits the top and bottom halves).

Would it be acceptable to consider the "atomic client callback" a 
non-strict rule?
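The threaded split described above follows this general pattern (a sketch with hypothetical names; with a NULL top half, as in the patch itself, the genirq core's default hard-IRQ handler simply wakes the thread):

```c
/* Sketch of the top/bottom-half split with a threaded IRQ: the
 * hard-IRQ top half only acknowledges the source and defers, the
 * threaded bottom half runs in process context, may sleep, and can
 * therefore safely call non-atomic client callbacks. */
static irqreturn_t ipcc_hard_irq(int irq, void *data)
{
	/* quickly mask/ack the interrupt source here */
	return IRQ_WAKE_THREAD;		/* defer the real work */
}

static irqreturn_t ipcc_irq_thread(int irq, void *data)
{
	struct stm32_ipcc *ipcc = data;

	/* process context: the rpmsg/remoteproc rx_callback chain
	 * can be invoked here without atomicity constraints */
	return IRQ_HANDLED;
}

ret = devm_request_threaded_irq(dev, irq, ipcc_hard_irq,
				ipcc_irq_thread, IRQF_ONESHOT,
				dev_name(dev), ipcc);
```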

>   So was tx_done() but some
> platforms needed preparing for the message to be sent. Your client is
> not going to be used by other platforms or even over other
> controllers, so if your prepare is NULL/atomic, you should assume
> tx_done to be atomic and not lose performance. If time comes to fix it,
> we'll move prepare() out of the atomic path.
>
>
>>> .......
>>>> +
>>>> +static struct platform_driver stm32_ipcc_driver = {
>>>> +       .driver = {
>>>> +               .name = "stm32-ipcc",
>>>> +               .owner = THIS_MODULE,
>>>>
>>> No need of owner here these days.
>> OK, I will suppress it.
>>
>>> And also maybe use readl/writel, instead of _relaxed.
>> The IPCC device is exclusively used on ARM. On the ARM architecture,
>> ioremapped device memory is strictly ordered and uncached.
>> In that case, using _relaxed avoids an unneeded memory barrier, slightly
>> improving performance.
>>
> It's not about portability; it's that the impact in favor of the
> _relaxed() versions is negligible when all you do is program some
> registers and not heavy-duty I/O. But I am OK either way. You'd gain far
> more performance handling irqs in a non-threaded manner :)
>
> Cheers!
Jassi Brar April 6, 2018, 4:20 p.m. UTC | #4
On Fri, Apr 6, 2018 at 8:35 PM, Fabien DESSENNE <fabien.dessenne@st.com> wrote:
>
> On 06/04/18 14:56, Jassi Brar wrote:
>> On Fri, Apr 6, 2018 at 5:59 PM, Fabien DESSENNE <fabien.dessenne@st.com> wrote:
>>> Hi
>>>
>>>
>>> On 05/04/18 11:38, Jassi Brar wrote:
>>>> On Mon, Mar 12, 2018 at 4:28 PM, Fabien Dessenne <fabien.dessenne@st.com> wrote:
>>>> ....
>>>>> +
>>>>> +       /* irq */
>>>>> +       for (i = 0; i < IPCC_IRQ_NUM; i++) {
>>>>> +               ipcc->irqs[i] = of_irq_get_byname(dev->of_node, irq_name[i]);
>>>>> +               if (ipcc->irqs[i] < 0) {
>>>>> +                       dev_err(dev, "no IRQ specified %s\n", irq_name[i]);
>>>>> +                       ret = ipcc->irqs[i];
>>>>> +                       goto err_clk;
>>>>> +               }
>>>>> +
>>>>> +               ret = devm_request_threaded_irq(dev, ipcc->irqs[i], NULL,
>>>>> +                                               irq_thread[i], IRQF_ONESHOT,
>>>>> +                                               dev_name(dev), ipcc);
>>>>>
>>>> In your interrupt handlers you don't do anything that could block.
>>>> Threads only add some delay to your message handling.
>>>> So maybe use devm_request_irq()?
>>> The interrupt handlers call mbox_chan_received_data() /
>>> mbox_chan_txdone(), which in turn call the client's rx_callback() /
>>> tx_done() / tx_prepare(), whose behavior may be unsafe. Hence, using a
>>> threaded irq here seems to be a good choice.
>>>
>> rx_callback() is supposed to be atomic.
>
> I am worried about this atomic requirement (and honestly I had not
> noticed that the callbacks were expected to be atomic).
>
> In my case, remoteproc->virtio->rpmsg is the mailbox client defining the
> rx_callback.
> If I follow your suggestion, I would have to make this rx_callback atomic
> in remoteproc (or in virtio or rpmsg), and this does not seem to be so
> simple (add a worker somewhere in the middle?). Bjorn, feel free to
> comment on this part.
>
> An alternative implementation consists of using a threaded IRQ for the
> mailbox interrupt.
> This option is not only simple, but also splits the top and bottom
> halves at the irq level, which is IMHO a generally good practice.
>
> I can see that some mailbox clients implement callbacks that are NOT
> atomic, and I suspect this is the reason why some mailbox drivers use
> threaded_irq (the rockchip mailbox splits the top and bottom halves).
>
> Would it be acceptable to consider the "atomic client callback" a
> non-strict rule?
>
Of course you can traverse an atomic path from a sleepable context (but
not vice versa).
Please send in the final revision.

Thanks.
Fabien DESSENNE April 9, 2018, 9:03 a.m. UTC | #5
On 06/04/18 18:20, Jassi Brar wrote:
> On Fri, Apr 6, 2018 at 8:35 PM, Fabien DESSENNE <fabien.dessenne@st.com> wrote:
>> On 06/04/18 14:56, Jassi Brar wrote:
>>> On Fri, Apr 6, 2018 at 5:59 PM, Fabien DESSENNE <fabien.dessenne@st.com> wrote:
>>>> Hi
>>>>
>>>>
>>>> On 05/04/18 11:38, Jassi Brar wrote:
>>>>> On Mon, Mar 12, 2018 at 4:28 PM, Fabien Dessenne <fabien.dessenne@st.com> wrote:
>>>>> ....
>>>>>> +
>>>>>> +       /* irq */
>>>>>> +       for (i = 0; i < IPCC_IRQ_NUM; i++) {
>>>>>> +               ipcc->irqs[i] = of_irq_get_byname(dev->of_node, irq_name[i]);
>>>>>> +               if (ipcc->irqs[i] < 0) {
>>>>>> +                       dev_err(dev, "no IRQ specified %s\n", irq_name[i]);
>>>>>> +                       ret = ipcc->irqs[i];
>>>>>> +                       goto err_clk;
>>>>>> +               }
>>>>>> +
>>>>>> +               ret = devm_request_threaded_irq(dev, ipcc->irqs[i], NULL,
>>>>>> +                                               irq_thread[i], IRQF_ONESHOT,
>>>>>> +                                               dev_name(dev), ipcc);
>>>>>>
>>>>> In your interrupt handlers you don't do anything that could block.
>>>>> Threads only add some delay to your message handling.
>>>>> So maybe use devm_request_irq()?
>>>> The interrupt handlers call mbox_chan_received_data() /
>>>> mbox_chan_txdone(), which in turn call the client's rx_callback() /
>>>> tx_done() / tx_prepare(), whose behavior may be unsafe. Hence, using a
>>>> threaded irq here seems to be a good choice.
>>>>
>>> rx_callback() is supposed to be atomic.
>> I am worried about this atomic requirement (and honestly I had not
>> noticed that the callbacks were expected to be atomic).
>>
>> In my case, remoteproc->virtio->rpmsg is the mailbox client defining the
>> rx_callback.
>> If I follow your suggestion, I would have to make this rx_callback atomic
>> in remoteproc (or in virtio or rpmsg), and this does not seem to be so
>> simple (add a worker somewhere in the middle?). Bjorn, feel free to
>> comment on this part.
>>
>> An alternative implementation consists of using a threaded IRQ for the
>> mailbox interrupt.
>> This option is not only simple, but also splits the top and bottom
>> halves at the irq level, which is IMHO a generally good practice.
>>
>> I can see that some mailbox clients implement callbacks that are NOT
>> atomic, and I suspect this is the reason why some mailbox drivers use
>> threaded_irq (the rockchip mailbox splits the top and bottom halves).
>>
>> Would it be acceptable to consider the "atomic client callback" a
>> non-strict rule?
>>
> Of course you can traverse an atomic path from a sleepable context (but
> not vice versa).

So, to be sure we understand each other, I can use threaded_irq, right?

> Please send in the final revision.
>
> Thanks.
Jassi Brar April 9, 2018, 10:10 a.m. UTC | #6
On Mon, Apr 9, 2018 at 2:33 PM, Fabien DESSENNE <fabien.dessenne@st.com> wrote:
>
>
> On 06/04/18 18:20, Jassi Brar wrote:
>> On Fri, Apr 6, 2018 at 8:35 PM, Fabien DESSENNE <fabien.dessenne@st.com> wrote:
>>> On 06/04/18 14:56, Jassi Brar wrote:
>>>> On Fri, Apr 6, 2018 at 5:59 PM, Fabien DESSENNE <fabien.dessenne@st.com> wrote:
>>>>> Hi
>>>>>
>>>>>
>>>>> On 05/04/18 11:38, Jassi Brar wrote:
>>>>>> On Mon, Mar 12, 2018 at 4:28 PM, Fabien Dessenne <fabien.dessenne@st.com> wrote:
>>>>>> ....
>>>>>>> +
>>>>>>> +       /* irq */
>>>>>>> +       for (i = 0; i < IPCC_IRQ_NUM; i++) {
>>>>>>> +               ipcc->irqs[i] = of_irq_get_byname(dev->of_node, irq_name[i]);
>>>>>>> +               if (ipcc->irqs[i] < 0) {
>>>>>>> +                       dev_err(dev, "no IRQ specified %s\n", irq_name[i]);
>>>>>>> +                       ret = ipcc->irqs[i];
>>>>>>> +                       goto err_clk;
>>>>>>> +               }
>>>>>>> +
>>>>>>> +               ret = devm_request_threaded_irq(dev, ipcc->irqs[i], NULL,
>>>>>>> +                                               irq_thread[i], IRQF_ONESHOT,
>>>>>>> +                                               dev_name(dev), ipcc);
>>>>>>>
>>>>>> In your interrupt handlers you don't do anything that could block.
>>>>>> Threads only add some delay to your message handling.
>>>>>> So maybe use devm_request_irq()?
>>>>> The interrupt handlers call mbox_chan_received_data() /
>>>>> mbox_chan_txdone(), which in turn call the client's rx_callback() /
>>>>> tx_done() / tx_prepare(), whose behavior may be unsafe. Hence, using a
>>>>> threaded irq here seems to be a good choice.
>>>>>
>>>> rx_callback() is supposed to be atomic.
>>> I am worried about this atomic requirement (and honestly I had not
>>> noticed that the callbacks were expected to be atomic).
>>>
>>> In my case, remoteproc->virtio->rpmsg is the mailbox client defining the
>>> rx_callback.
>>> If I follow your suggestion, I would have to make this rx_callback atomic
>>> in remoteproc (or in virtio or rpmsg), and this does not seem to be so
>>> simple (add a worker somewhere in the middle?). Bjorn, feel free to
>>> comment on this part.
>>>
>>> An alternative implementation consists of using a threaded IRQ for the
>>> mailbox interrupt.
>>> This option is not only simple, but also splits the top and bottom
>>> halves at the irq level, which is IMHO a generally good practice.
>>>
>>> I can see that some mailbox clients implement callbacks that are NOT
>>> atomic, and I suspect this is the reason why some mailbox drivers use
>>> threaded_irq (the rockchip mailbox splits the top and bottom halves).
>>>
>>> Would it be acceptable to consider the "atomic client callback" a
>>> non-strict rule?
>>>
>> Of course you can traverse an atomic path from a sleepable context (but
>> not vice versa).
>
> So, to be sure we understand each other, I can use threaded_irq, right?
>
Yes. It's a platform-specific driver; I can't dictate the features you want :)