Message ID: 20191204155912.17590-8-brgl@bgdev.pl
State: New
Series: gpiolib: add an ioctl() for monitoring line status changes
On Wed, Dec 4, 2019 at 6:01 PM Bartosz Golaszewski <brgl@bgdev.pl> wrote:
>
> From: Bartosz Golaszewski <bgolaszewski@baylibre.com>
>
> The read_lock mutex is supposed to prevent collisions between reading
> and writing to the line event kfifo but it's actually only taken when
> the events are being read from it.
>
> Drop the mutex entirely and reuse the spinlock made available to us in
> the waitqueue struct. Take the lock whenever the fifo is modified or
> inspected. Drop the call to kfifo_to_user() and instead first extract
> the new element from kfifo when the lock is taken and only then pass
> it on to the user after the spinlock is released.

My comments below.

> +	spin_lock(&le->wait.lock);
> 	if (!kfifo_is_empty(&le->events))
> 		events = EPOLLIN | EPOLLRDNORM;
> +	spin_unlock(&le->wait.lock);

Sounds like a candidate for a kfifo_is_empty_spinlocked() helper.

> 	struct lineevent_state *le = filep->private_data;
> -	unsigned int copied;
> +	struct gpioevent_data event;
> 	int ret;

> +	if (count < sizeof(event))
> 		return -EINVAL;

This still has an issue with compatible syscalls. See the patch I sent
recently. I don't know which way you think is better: a) apply mine and
rebase your series on top, or b) the other way around. I can do b) if
you think it shouldn't be backported.

Btw, either way this is a benefit for the follow-up (I see you drop
kfifo_to_user() and add an event variable on the stack).

> +	return sizeof(event);

Also see the comments in my patch regarding the event handling.
On Wed, Dec 4, 2019 at 11:25 PM Andy Shevchenko <andy.shevchenko@gmail.com> wrote:
>
> On Wed, Dec 4, 2019 at 6:01 PM Bartosz Golaszewski <brgl@bgdev.pl> wrote:
> >
> > From: Bartosz Golaszewski <bgolaszewski@baylibre.com>
> >
> > The read_lock mutex is supposed to prevent collisions between reading
> > and writing to the line event kfifo but it's actually only taken when
> > the events are being read from it.
> >
> > Drop the mutex entirely and reuse the spinlock made available to us in
> > the waitqueue struct. Take the lock whenever the fifo is modified or
> > inspected. Drop the call to kfifo_to_user() and instead first extract
> > the new element from kfifo when the lock is taken and only then pass
> > it on to the user after the spinlock is released.
> >
>
> My comments below.
>
> > +	spin_lock(&le->wait.lock);
> > 	if (!kfifo_is_empty(&le->events))
> > 		events = EPOLLIN | EPOLLRDNORM;
> > +	spin_unlock(&le->wait.lock);
>
> Sounds like a candidate for a kfifo_is_empty_spinlocked() helper.

Yeah, I noticed, but I thought I'd just add it later separately - it's
always easier to merge self-contained series.

> > 	struct lineevent_state *le = filep->private_data;
> > -	unsigned int copied;
> > +	struct gpioevent_data event;
> > 	int ret;
>
> > +	if (count < sizeof(event))
> > 		return -EINVAL;
>
> This still has an issue with compatible syscalls. See the patch I sent
> recently. I don't know which way you think is better: a) apply mine and
> rebase your series on top, or b) the other way around. I can do b) if
> you think it shouldn't be backported.

Looking at your patch, it seems to me it's best to rebase yours on top
of this one - where I simply do copy_to_user() we can add a special
case for 32-bit user space. I can try to do this myself for v3 if you
agree.

Bart

> Btw, either way this is a benefit for the follow-up (I see you drop
> kfifo_to_user() and add an event variable on the stack).
>
> > +	return sizeof(event);
>
> Also see the comments in my patch regarding the event handling.
> --
> With Best Regards,
> Andy Shevchenko
On Thu, Dec 5, 2019 at 11:31 AM Bartosz Golaszewski <bgolaszewski@baylibre.com> wrote:
> On Wed, Dec 4, 2019 at 11:25 PM Andy Shevchenko <andy.shevchenko@gmail.com> wrote:
> > On Wed, Dec 4, 2019 at 6:01 PM Bartosz Golaszewski <brgl@bgdev.pl> wrote:
> > > +	spin_lock(&le->wait.lock);
> > > 	if (!kfifo_is_empty(&le->events))
> > > 		events = EPOLLIN | EPOLLRDNORM;
> > > +	spin_unlock(&le->wait.lock);
> >
> > Sounds like a candidate for a kfifo_is_empty_spinlocked() helper.
>
> Yeah, I noticed, but I thought I'd just add it later separately - it's
> always easier to merge self-contained series.

...and easier to forget about. But it's up to you :-)

> > > 	struct lineevent_state *le = filep->private_data;
> > > -	unsigned int copied;
> > > +	struct gpioevent_data event;
> > > 	int ret;
> >
> > > +	if (count < sizeof(event))
> > > 		return -EINVAL;
> >
> > This still has an issue with compatible syscalls. See the patch I sent
> > recently. I don't know which way you think is better: a) apply mine and
> > rebase your series on top, or b) the other way around. I can do b) if
> > you think it shouldn't be backported.
>
> Looking at your patch, it seems to me it's best to rebase yours on top
> of this one - where I simply do copy_to_user() we can add a special
> case for 32-bit user space. I can try to do this myself for v3 if you
> agree.

Yeah, I'm fine with that.
diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
index b7043946c029..43f90eca6d45 100644
--- a/drivers/gpio/gpiolib.c
+++ b/drivers/gpio/gpiolib.c
@@ -788,8 +788,6 @@ static int linehandle_create(struct gpio_device *gdev, void __user *ip)
  * @irq: the interrupt that trigger in response to events on this GPIO
  * @wait: wait queue that handles blocking reads of events
  * @events: KFIFO for the GPIO events
- * @read_lock: mutex lock to protect reads from colliding with adding
- *             new events to the FIFO
  * @timestamp: cache for the timestamp storing it between hardirq
  *             and IRQ thread, used to bring the timestamp close to the actual
  *             event
@@ -802,7 +800,6 @@ struct lineevent_state {
 	int irq;
 	wait_queue_head_t wait;
 	DECLARE_KFIFO(events, struct gpioevent_data, 16);
-	struct mutex read_lock;
 	u64 timestamp;
 };
 
@@ -818,8 +815,10 @@ static __poll_t lineevent_poll(struct file *filep,
 
 	poll_wait(filep, &le->wait, wait);
 
+	spin_lock(&le->wait.lock);
 	if (!kfifo_is_empty(&le->events))
 		events = EPOLLIN | EPOLLRDNORM;
+	spin_unlock(&le->wait.lock);
 
 	return events;
 }
@@ -831,43 +830,47 @@ static ssize_t lineevent_read(struct file *filep,
 			      loff_t *f_ps)
 {
 	struct lineevent_state *le = filep->private_data;
-	unsigned int copied;
+	struct gpioevent_data event;
 	int ret;
 
-	if (count < sizeof(struct gpioevent_data))
+	if (count < sizeof(event))
 		return -EINVAL;
 
-	do {
+	for (;;) {
+		spin_lock(&le->wait.lock);
 		if (kfifo_is_empty(&le->events)) {
-			if (filep->f_flags & O_NONBLOCK)
+			if (filep->f_flags & O_NONBLOCK) {
+				spin_unlock(&le->wait.lock);
 				return -EAGAIN;
+			}
 
-			ret = wait_event_interruptible(le->wait,
+			ret = wait_event_interruptible_locked(le->wait,
 					!kfifo_is_empty(&le->events));
-			if (ret)
+			if (ret) {
+				spin_unlock(&le->wait.lock);
 				return ret;
-		}
+			}
-
-		if (mutex_lock_interruptible(&le->read_lock))
-			return -ERESTARTSYS;
-		ret = kfifo_to_user(&le->events, buf, count, &copied);
-		mutex_unlock(&le->read_lock);
+		}
 
-		if (ret)
-			return ret;
+		ret = kfifo_out(&le->events, &event, 1);
+		spin_unlock(&le->wait.lock);
+		if (ret == 1)
+			break;
 
 		/*
-		 * If we couldn't read anything from the fifo (a different
-		 * thread might have been faster) we either return -EAGAIN if
-		 * the file descriptor is non-blocking, otherwise we go back to
-		 * sleep and wait for more data to arrive.
+		 * We should never get here since we're holding the lock from
+		 * the moment we noticed a new event until calling kfifo_out()
+		 * but we must check the return value. On the off-chance - just
+		 * go back to waiting.
 		 */
-		if (copied == 0 && (filep->f_flags & O_NONBLOCK))
-			return -EAGAIN;
+	}
 
-	} while (copied == 0);
+	ret = copy_to_user(buf, &event, sizeof(event));
+	if (ret)
+		return -EFAULT;
 
-	return copied;
+	return sizeof(event);
 }
 
 static int lineevent_release(struct inode *inode, struct file *filep)
@@ -969,7 +972,7 @@ static irqreturn_t lineevent_irq_thread(int irq, void *p)
 		return IRQ_NONE;
 	}
 
-	ret = kfifo_put(&le->events, ge);
+	ret = kfifo_in_spinlocked(&le->events, &ge, 1, &le->wait.lock);
 	if (ret)
 		wake_up_poll(&le->wait, EPOLLIN);
 
@@ -1084,7 +1087,6 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
 	INIT_KFIFO(le->events);
 	init_waitqueue_head(&le->wait);
-	mutex_init(&le->read_lock);
 
 	/* Request a thread to read the events */
 	ret = request_threaded_irq(le->irq,