Learning epoll-related knowledge on Linux


=Start=

Motivation:

The need to learn and improve.

Main text:

Reference notes:
An introduction to the epoll programming interface

epoll provides three programming interfaces, listed below:

int epoll_create(int size);
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);

1. int epoll_create(int size);

Creates an epoll instance and returns a file descriptor (the epoll "handle"). The size argument tells the kernel roughly how many descriptors the caller expects to monitor. Unlike the first argument to select(), which is the highest monitored fd plus one, size does not limit the number of descriptors epoll can monitor; it is only a hint for the kernel's initial allocation of internal data structures (since Linux 2.6.8 the value is ignored, but it must still be greater than zero). The epoll instance itself occupies a file descriptor: on Linux you can see it under /proc/$pid/fd/. Once you are done with epoll you must therefore call close() on it, otherwise file descriptors may eventually be exhausted.
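
As a minimal, self-contained sketch (the hint value 256 and the printf are purely illustrative), creating and then releasing an epoll instance looks roughly like this:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/epoll.h>

int main(void)
{
  /* The size argument is only a hint (ignored since Linux 2.6.8) but must be > 0;
     epoll_create1(0) is the modern equivalent that drops the hint entirely. */
  int epfd = epoll_create(256);
  if (epfd == -1)
    {
      perror("epoll_create");
      exit(EXIT_FAILURE);
    }

  printf("epoll instance created, fd = %d\n", epfd);

  /* ... epoll_ctl() registrations and epoll_wait() calls would go here ... */

  close(epfd);   /* release the descriptor once epoll is no longer needed */
  return 0;
}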

2. int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);

Performs operation op on the target file descriptor fd.
– epfd: the return value of epoll_create().
– op: the operation to perform, expressed by one of three macros: EPOLL_CTL_ADD (add), EPOLL_CTL_DEL (delete) and EPOLL_CTL_MOD (modify), which add, remove or change the events monitored for fd.
– fd: the file descriptor to be monitored.
– event: tells the kernel which events to monitor for fd. struct epoll_event is defined as follows (a short registration sketch follows the list of event flags below):

struct epoll_event {
  __uint32_t events;  /* Epoll events */
  epoll_data_t data;  /* User data variable */
};
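// The data member is the union epoll_data_t, declared in <sys/epoll.h> roughly as:
typedef union epoll_data {
  void       *ptr;
  int         fd;
  __uint32_t  u32;
  __uint64_t  u64;
} epoll_data_t;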
//events can be a bit mask (OR combination) of the following macros:
EPOLLIN : the associated file descriptor is readable (this also fires when the peer socket performs an orderly shutdown);
EPOLLOUT: the associated file descriptor is writable;
EPOLLPRI: the associated file descriptor has urgent data to read (i.e. out-of-band data has arrived);
EPOLLERR: an error condition occurred on the associated file descriptor;
EPOLLHUP: the associated file descriptor was hung up;
EPOLLET: requests edge-triggered (ET) notification for the associated file descriptor, as opposed to the default level-triggered (LT) mode.
EPOLLONESHOT: report the event only once; after it has been delivered, the descriptor must be re-armed with epoll_ctl() if you want to keep monitoring that socket.
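
A minimal registration sketch follows; epfd and listen_fd are placeholders for an epoll instance and a listening socket that are assumed to exist already:

struct epoll_event ev;
ev.events = EPOLLIN | EPOLLET;   /* want readability notifications, edge-triggered */
ev.data.fd = listen_fd;          /* data is returned untouched by epoll_wait(), so store the fd here */
if (epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev) == -1)
  {
    perror("epoll_ctl: EPOLL_CTL_ADD");
    exit(EXIT_FAILURE);
  }
/* EPOLL_CTL_MOD would change the registered events;
   EPOLL_CTL_DEL stops monitoring (since Linux 2.6.9 its event argument may be NULL). */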

3. int epoll_wait(int epfd, struct epoll_event * events, int maxevents, int timeout);

Waits for I/O events on epfd and returns at most maxevents events.

The events argument is the array through which the kernel returns the set of ready events, and maxevents tells the kernel how large that array is (it must be greater than zero; it is unrelated to the size hint passed to epoll_create()). The timeout argument is a timeout in milliseconds: 0 makes epoll_wait() return immediately even if nothing is ready, and -1 makes it block indefinitely until an event arrives. The function returns the number of events that need handling; a return value of 0 means the call timed out.
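
Before the complete program, here is a rough sketch of a typical wait loop (a fragment, not a runnable program; it assumes epfd is an epoll instance on which descriptors have already been registered):

struct epoll_event evlist[64];

for (;;)
  {
    int n = epoll_wait(epfd, evlist, 64, -1);   /* -1: block until something is ready */
    if (n == -1)
      {
        if (errno == EINTR)
          continue;            /* interrupted by a signal, simply retry */
        perror("epoll_wait");
        break;
      }
    for (int i = 0; i < n; i++)
      {
        /* evlist[i].data is exactly what was stored at epoll_ctl() time,
           evlist[i].events is the bit mask of events that actually fired */
        if (evlist[i].events & EPOLLIN)
          {
            /* read from evlist[i].data.fd here */
          }
      }
  }

The complete example below (it appears to be taken from the banu.com article listed in the references) ties the three calls together: it creates a non-blocking listening TCP socket, registers it with epoll in edge-triggered mode, accepts incoming connections, and writes whatever the clients send to standard output.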


#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/epoll.h>
#include <errno.h>
#define MAXEVENTS 64
static int make_socket_non_blocking (int sfd)
{
  int flags, s;
  flags = fcntl (sfd, F_GETFL, 0);
  if (flags == -1)
    {
      perror ("fcntl");
      return -1;
    }
  flags |= O_NONBLOCK;
  s = fcntl (sfd, F_SETFL, flags);
  if (s == -1)
    {
      perror ("fcntl");
      return -1;
    }
  return 0;
}
static int create_and_bind (char *port)
{
  struct addrinfo hints;
  struct addrinfo *result, *rp;
  int s, sfd;
  memset (&hints, 0, sizeof (struct addrinfo));
  hints.ai_family = AF_UNSPEC;     /* Return IPv4 and IPv6 choices */
  hints.ai_socktype = SOCK_STREAM; /* We want a TCP socket */
  hints.ai_flags = AI_PASSIVE;     /* All interfaces */
  s = getaddrinfo (NULL, port, &hints, &result);
  if (s != 0)
    {
      fprintf (stderr, "getaddrinfo: %s\n", gai_strerror (s));
      return -1;
    }
  for (rp = result; rp != NULL; rp = rp->ai_next)
    {
      sfd = socket (rp->ai_family, rp->ai_socktype, rp->ai_protocol);
      if (sfd == -1)
        continue;
      s = bind (sfd, rp->ai_addr, rp->ai_addrlen);
      if (s == 0)
        {
          /* We managed to bind successfully! */
          break;
        }
      close (sfd);
    }
  if (rp == NULL)
    {
      fprintf (stderr, "Could not bind\n");
      return -1;
    }
  freeaddrinfo (result);
  return sfd;
}
int main (int argc, char *argv[])
{
  int sfd, s;
  int efd;
  struct epoll_event event;
  struct epoll_event *events;
  if (argc != 2)
    {
      fprintf (stderr, "Usage: %s [port]\n", argv[0]);
      exit (EXIT_FAILURE);
    }
  sfd = create_and_bind (argv[1]);
  if (sfd == -1)
    abort ();
  s = make_socket_non_blocking (sfd);
  if (s == -1)
    abort ();
  s = listen (sfd, SOMAXCONN);
  if (s == -1)
    {
      perror ("listen");
      abort ();
    }
  efd = epoll_create1 (0);
  if (efd == -1)
    {
      perror ("epoll_create");
      abort ();
    }
  event.data.fd = sfd;
  event.events = EPOLLIN | EPOLLET;
  s = epoll_ctl (efd, EPOLL_CTL_ADD, sfd, &event);
  if (s == -1)
    {
      perror ("epoll_ctl");
      abort ();
    }
  /* Buffer where events are returned */
  events = calloc (MAXEVENTS, sizeof event);
  /* The event loop */
  while (1)
    {
      int n, i;
      n = epoll_wait (efd, events, MAXEVENTS, -1);
      for (i = 0; i < n; i++)
    {
      if ((events[i].events & EPOLLERR) ||
              (events[i].events & EPOLLHUP) ||
              (!(events[i].events & EPOLLIN)))
        {
              /* An error has occurred on this fd, or the socket is not
                 ready for reading (why were we notified then?) */
          fprintf (stderr, "epoll error\n");
          close (events[i].data.fd);
          continue;
        }
      else if (sfd == events[i].data.fd)
        {
              /* We have a notification on the listening socket, which
                 means one or more incoming connections. */
              while (1)
                {
                  struct sockaddr_storage in_addr;  /* large enough for IPv4 and IPv6 */
                  socklen_t in_len;
                  int infd;
                  char hbuf[NI_MAXHOST], sbuf[NI_MAXSERV];
                  in_len = sizeof in_addr;
                  infd = accept (sfd, (struct sockaddr *) &in_addr, &in_len);
                  if (infd == -1)
                    {
                      if ((errno == EAGAIN) ||
                          (errno == EWOULDBLOCK))
                        {
                          /* We have processed all incoming
                             connections. */
                          break;
                        }
                      else
                        {
                          perror ("accept");
                          break;
                        }
                    }
                  s = getnameinfo ((struct sockaddr *) &in_addr, in_len,
                                   hbuf, sizeof hbuf,
                                   sbuf, sizeof sbuf,
                                   NI_NUMERICHOST | NI_NUMERICSERV);
                  if (s == 0)
                    {
                      printf("Accepted connection on descriptor %d "
                             "(host=%s, port=%s)\n", infd, hbuf, sbuf);
                    }
                  /* Make the incoming socket non-blocking and add it to the
                     list of fds to monitor. */
                  s = make_socket_non_blocking (infd);
                  if (s == -1)
                    abort ();
                  event.data.fd = infd;
                  event.events = EPOLLIN | EPOLLET;
                  s = epoll_ctl (efd, EPOLL_CTL_ADD, infd, &event);
                  if (s == -1)
                    {
                      perror ("epoll_ctl");
                      abort ();
                    }
                }
              continue;
            }
          else
            {
              /* We have data on the fd waiting to be read. Read and
                 display it. We must read whatever data is available
                 completely, as we are running in edge-triggered mode
                 and won't get a notification again for the same
                 data. */
              int done = 0;
              while (1)
                {
                  ssize_t count;
                  char buf[512];
                  count = read (events[i].data.fd, buf, sizeof buf);
                  if (count == -1)
                    {
                      /* If errno == EAGAIN, that means we have read all
                         data. So go back to the main loop. */
                      if (errno != EAGAIN)
                        {
                          perror ("read");
                          done = 1;
                        }
                      break;
                    }
                  else if (count == 0)
                    {
                      /* End of file. The remote has closed the
                         connection. */
                      done = 1;
                      break;
                    }
                  /* Write the buffer to standard output */
                  s = write (1, buf, count);
                  if (s == -1)
                    {
                      perror ("write");
                      abort ();
                    }
                }
              if (done)
                {
                  printf ("Closed connection on descriptor %d\n",
                          events[i].data.fd);
                  /* Closing the descriptor will make epoll remove it
                     from the set of descriptors which are monitored. */
                  close (events[i].data.fd);
                }
            }
        }
    }
  free (events);
  close (sfd);
  return EXIT_SUCCESS;
}
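
To try the example, one plausible way to build and exercise it (the file name epoll_example.c is just an assumption) is:

gcc -Wall -o epoll_example epoll_example.c
./epoll_example 8000        # start the server on port 8000
# in another terminal:
nc 127.0.0.1 8000           # type a few lines; they should appear on the server's stdout
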
References:

https://banu.com/blog/2/how-to-use-epoll-a-complete-example-in-c/

epoll usage explained in detail
http://www.cnblogs.com/haippy/archive/2012/01/09/2317269.html

Linux network programming: an epoll example
http://blog.csdn.net/shanshanpt/article/details/7383400

How to use epoll? A complete example in C
http://www.yeolar.com/note/2012/07/02/epoll-example/

Understanding how to use epoll through a complete example
http://blog.jobbole.com/93566/

The best explanation of epoll I have read (reposted from Zhihu)
http://yaocoder.blog.51cto.com/2668309/888374
https://www.zhihu.com/question/20122137/answer/14049112

http://man7.org/linux/man-pages/man7/epoll.7.html
http://man7.org/linux/man-pages/man2/epoll_create.2.html
http://man7.org/linux/man-pages/man2/epoll_ctl.2.html
http://man7.org/linux/man-pages/man2/epoll_wait.2.html

=END=


4 comments on “Learning epoll-related knowledge on Linux”

  1. Linux I/O modes and select, poll, epoll explained in detail
    https://segmentfault.com/a/1190000003063859
    http://blog.taohuawu.club/article/linux-io-select-poll-epoll
    `
    1 Concepts
    1.1 User space and kernel space
    1.2 Process switching
    1.3 Process blocking
    1.4 File descriptors (fd)
    1.5 Buffered I/O
    2 I/O modes
    2.1 Blocking I/O
    2.2 Non-blocking I/O
    2.3 I/O multiplexing
    2.4 Asynchronous I/O
    2.5 Summary
    2.5.1 The difference between blocking and non-blocking
    2.5.2 The difference between synchronous I/O and asynchronous I/O
    3 I/O multiplexing with select, poll and epoll in detail
    3.1 select
    3.2 poll
    3.3 epoll
    3.3.1 How epoll is used
    3.3.2 Working modes
    3.3.2.1 LT mode
    3.3.2.2 ET mode
    3.3.2.3 Summary
    3.3.3 Code demonstration
    3.3.4 epoll summary
    4 References
    `

  2. Understanding I/O multiplexing with epoll in depth
    https://mp.weixin.qq.com/s/yA-FgBSDDq7c93wz4DFLkg
    `
    This article covers the following topics:
    The definition of I/O multiplexing and the background it arose from
    The I/O multiplexing tools available on Linux
    The basic building blocks of epoll's design
    The low-level implementation behind epoll's high performance
    epoll's ET and LT modes

    # The concept of multiplexing
    Multiplexing is not a new technology but a design idea. Communications and hardware design use frequency-division, time-division, wavelength-division and code-division multiplexing, and multiplexing scenarios are everywhere in daily life, so do not be intimidated by the terminology.
    In essence, multiplexing exists to resolve the imbalance between limited resources and too many users, and its theoretical basis is that resources can be released and reused.

    # Understanding I/O multiplexing
    What "I/O" means: in computing, I/O usually refers to disk I/O and network I/O; the I/O multiplexing discussed here mainly means network I/O. On Linux everything is a file, so network I/O is also commonly represented by a file descriptor (fd).

    What is being multiplexed: so what do these file descriptors share? In the networking scenario they share the task-handling thread, so a simple way to put it is that multiple I/O streams share a single thread.

    Why I/O multiplexing is feasible: the basic I/O operations are read and write, and because of the nature of network interaction there is inevitably waiting. In other words, reads and writes on a connection's fd alternate over time: the fd is sometimes readable or writable and sometimes idle, which is exactly what makes I/O multiplexing possible.

    In summary, I/O multiplexing is a technique that coordinates multiple releasable fd resources so that they take turns sharing a task-handling thread for their communication, mapping many fds onto one task-handling thread.
    `
