diff --git a/README.md b/README.md
index 3de42bd..2f1fb1f 100644
--- a/README.md
+++ b/README.md
@@ -1,72 +1,63 @@
# pyinotifyd
-A daemon to monitore filesystems events with inotify on Linux and execute tasks, which can be Python functions or shell commands. It is build on top of the pyinotify library.
+A daemon that monitors filesystem events with inotify on Linux and executes tasks (Python functions or shell commands), optionally after a delay. Delayed tasks can also be canceled.
## Requirements
* [pyinotify](https://github.com/seb-m/pyinotify)
## Installation
-* Install pyinotifyd with pip.
```sh
pip install pyinotifyd
```
-* Modify /etc/pyinotifyd/config.py according to your needs.
# Configuration
-The config file is written in Python syntax. pyinotifyd reads and executes its content, which means you can add custom Python code to the config file.
+The config file **/etc/pyinotifyd/config.py** is written in Python syntax. pyinotifyd reads and executes its content, which means you can add custom Python code to the config file.
-To pass config options to pyinotifyd, define a dictionary named **pyinotifyd_config**.
-This is the default:
+## Tasks
+Tasks are Python functions that are called when an event occurs.
+This is a very simple example task that just logs the task_id and the event:
```python
-pyinotifyd_config = {
- # List of watches, see description below
- "watches": [],
-
- # Loglevel (see https://docs.python.org/3/library/logging.html#levels)
- "loglevel": logging.INFO,
-
- # Timeout to wait for pending tasks to complete during shutdown
- "shutdown_timeout": 30
-}
+async def custom_task(event, task_id):
+ logging.info(f"{task_id}: execute example task: {event}")
```
+A task can be bound directly to an event type in an event map. Although this is the easiest and quickest way, it is usually better to wrap the task in a scheduler.
## Schedulers
-pyinotifyd comes with different schedulers to schedule tasks with an optional delay. The advantages of using a scheduler are consistent logging and the possibility to cancel delayed tasks.
+pyinotifyd provides different schedulers that run tasks with an optional delay. Using a scheduler gives you consistent logging, the possibility to cancel delayed tasks and the ability to differentiate between events on files and on directories.
### TaskScheduler
-This scheduler is used to run Python functions.
-
-
-class **TaskScheduler**(*job, delay=0, files=True, dirs=False, logname="TaskScheduler"*)
-
-Return a **TaskScheduler** object configured to call the Python function *job* with a delay of *delay* seconds. Use *files* and *dirs* to define if *job* is called for events on files and/or directories. Log messages with *logname*.
+TaskScheduler executes *task* after an optional *delay* in seconds. Use the *files* and *dirs* arguments to schedule the task only for events on files and/or directories.
+The *logname* argument sets a custom name for log messages. All arguments except *task* are optional.
+```python
+s = TaskScheduler(task=custom_task, files=True, dirs=False, delay=0, logname="TaskScheduler")
+```
+TaskScheduler provides two methods that can be bound to event types in an event map (a short example follows the list):
+* **s.schedule**
+  Schedule the task. If a task is already scheduled for the same path, it is canceled first.
+* **s.cancel**
+  Cancel a scheduled task.
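+For example, **s.schedule** and **s.cancel** could be paired like this (a sketch using event types taken from the example config, see the Event map section below):
+```python
+example_map = EventMap({"IN_CLOSE_WRITE": s.schedule,
+                        "IN_DELETE": s.cancel})
+```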
### ShellScheduler
-
-## Watches
-A Watch is defined as a dictionary.
-This is the default:
+ShellScheduler executes the shell command *cmd*. The placeholders **{maskname}**, **{pathname}** and **{src_pathname}** are replaced with the corresponding values of the event. ShellScheduler accepts the same optional arguments as TaskScheduler and provides the same **schedule** and **cancel** methods.
```python
-{
- # path to watch, globbing is allowed
- "path": "",
-
- # set to True to add a watch on each subdirectory
- "rec": False,
-
- # set to True to automatically add watches on newly created directories in watched parent path
- "auto_add": False,
-
- # dictionary which contains the event map, see description below
- "event_map": {}
-}
+s1 = ShellScheduler(cmd="/usr/local/bin/task.sh {maskname} {pathname} {src_pathname}")
```
-
-### Event maps
-An event map is defined as a dictionary. It is used to map different event types to Python functions. Those functions are called with the event-object a task-id as positional arguments if an event is received. It is possible to set a list of functions to run multiple tasks on a single event. If an event type is not present in the map or None is given, the event type is ignored.
+## Event map
+An event map maps inotify event types to tasks. A list of tasks can be given to run multiple tasks on a single event. If the task of an event type is set to None, the event type is ignored.
This is an example:
```python
-{
- "IN_CLOSE_NOWRITE": [s1.schedule, s2.schedule],
- "IN_CLOSE_WRITE": s1.schedule
-}
+event_map = EventMap({"IN_CLOSE_NOWRITE": [s.schedule, s1.schedule],
+ "IN_CLOSE_WRITE": s.schedule})
+```
+
+## Watches
+Watch *path* (globbing is allowed) for the event types in *event_map* and execute the corresponding task(s). If *rec* is True, a watch is added on each subdirectory of *path*. If *auto_add* is True, watches are added automatically on newly created subdirectories of *path*.
+
+```python
+watch = Watch(path="/tmp", event_map=event_map, rec=False, auto_add=False)
+```
+
+## PyinotifydConfig
+pyinotifyd expects an instance of PyinotifydConfig named **pyinotifyd_config** that holds its config options: the list of *watches*, the *loglevel* (see https://docs.python.org/3/library/logging.html#levels) and the *shutdown_timeout*. During shutdown, pyinotifyd waits up to *shutdown_timeout* seconds for pending tasks to complete.
+```python
+pyinotifyd_config = PyinotifydConfig(watches=[watch], loglevel=logging.INFO, shutdown_timeout=30)
```
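+## Complete example
+Putting the pieces together, a minimal **/etc/pyinotifyd/config.py** could look like the following sketch. The task body, the event types and the watched path are placeholders, adjust them to your needs:
+```python
+async def custom_task(event, task_id):
+    logging.info(f"{task_id}: execute example task: {event}")
+
+s = TaskScheduler(task=custom_task, files=True, dirs=False, delay=10)
+event_map = EventMap({"IN_CLOSE_WRITE": s.schedule,
+                      "IN_MOVED_TO": s.schedule,
+                      "IN_DELETE": s.cancel,
+                      "IN_MOVED_FROM": s.cancel})
+watch = Watch(path="/tmp", event_map=event_map, rec=True, auto_add=True)
+
+pyinotifyd_config = PyinotifydConfig(
+    watches=[watch], loglevel=logging.INFO, shutdown_timeout=30)
+```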
diff --git a/docs/config.py b/docs/config.py
index 4f1612d..75453e5 100644
--- a/docs/config.py
+++ b/docs/config.py
@@ -4,60 +4,56 @@
# Example usage of TaskScheduler #
####################################
-#def custom_job(event, task_id):
-# logging.info(f"{task_id}: execute task for {event}")
+#async def custom_task(event, task_id):
+# logging.info(f"{task_id}: execute example task: {event}")
#
-#s = TaskScheduler(delay=10, job=custom_job)
+#s = TaskScheduler(task=custom_task, files=True, dirs=False)
#####################################################
# Example usage of TaskScheduler with FileManager #
#####################################################
-#fm = FileManager(
-# rules=[
-# {"action": "move",
-# "src_re": r"^(?P.*)",
-# "dst_re": r"\g.processed"}
-# ]
-#)
-#
-#s = TaskScheduler(delay=10, job=fm.job)
+#rules=[{"action": "move",
+# "src_re": r"^(?P.*)",
+# "dst_re": r"\g.processed"}]
+#fm = FileManager(rules=rules, auto_create=True)
+#s = TaskScheduler(task=fm.task, delay=10, files=True, dirs=False)
#####################################
# Example usage of ShellScheduler #
#####################################
-#s = ShellScheduler(cmd="/usr/local/bin/task.sh {maskname} {pathname} {src_pathname}")
+#cmd = "/usr/local/bin/task.sh {maskname} {pathname} {src_pathname}"
+#s = ShellScheduler(cmd=cmd)
+
+
+###################
+# Example watch #
+###################
+
+#event_map = EventMap({"IN_ACCESS": None,
+# "IN_ATTRIB": None,
+# "IN_CLOSE_NOWRITE": None,
+# "IN_CLOSE_WRITE": s.schedule,
+# "IN_CREATE": None,
+# "IN_DELETE": s.cancel,
+# "IN_DELETE_SELF": s.cancel,
+# "IN_IGNORED": None,
+# "IN_MODIFY": s.cancel,
+# "IN_MOVE_SELF": None,
+# "IN_MOVED_FROM": s.cancel,
+# "IN_MOVED_TO": s.schedule,
+# "IN_OPEN": None,
+# "IN_Q_OVERFLOW": None,
+# "IN_UNMOUNT": s.cancel})
+#watch = Watch(path="/tmp", event_map=event_map, rec=True, auto_add=True)
###############################
# Example pyinotifyd config #
###############################
-#pyinotifyd_config = {
-# "watches": [
-# {"path": "/tmp",
-# "rec": True,
-# "auto_add": True,
-# "event_map": {
-# "IN_ACCESS": None,
-# "IN_ATTRIB": None,
-# "IN_CLOSE_NOWRITE": None,
-# "IN_CLOSE_WRITE": s.schedule,
-# "IN_CREATE": None,
-# "IN_DELETE": s.cancel,
-# "IN_DELETE_SELF": s.cancel,
-# "IN_IGNORED": None,
-# "IN_MODIFY": s.cancel,
-# "IN_MOVE_SELF": None,
-# "IN_MOVED_FROM": s.cancel,
-# "IN_MOVED_TO": s.schedule,
-# "IN_OPEN": None,
-# "IN_Q_OVERFLOW": None,
-# "IN_UNMOUNT": s.cancel}
-# }
-# ],
-# "loglevel": logging.INFO,
-# "shutdown_timeout": 15}
+#pyinotifyd_config = PyinotifydConfig(
+# watches=[watch], loglevel=logging.INFO, shutdown_timeout=30)
diff --git a/pyinotifyd.py b/pyinotifyd.py
index 523e909..c35c3e5 100755
--- a/pyinotifyd.py
+++ b/pyinotifyd.py
@@ -29,16 +29,17 @@ from uuid import uuid4
__version__ = "0.0.1"
+
class Task:
- def __init__(self, event, delay, task_id, job, callback=None,
+ def __init__(self, event, delay, task_id, task, callback=None,
logname="Task"):
self._event = event
self._path = event.pathname
self._delay = delay
self.task_id = task_id
- self._task = None
- self._job = job
+ self._job = task
self._callback = callback
+ self._task = None
self._log = logging.getLogger(logname)
async def _start(self):
@@ -66,16 +67,38 @@ class Task:
self.start()
+class TaskList:
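+    """Group multiple callables and execute all of them for a single event."""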
+    def __init__(self, tasks=None):
+        if tasks is None:
+            tasks = []
+        elif not isinstance(tasks, list):
+            tasks = [tasks]
+
+        self._tasks = tasks
+
+ def add(self, task):
+ self._tasks.append(task)
+
+ def remove(self, task):
+ self._tasks.remove(task)
+
+ def execute(self, event):
+ for task in self._tasks:
+ task(event)
+
+
class TaskScheduler:
- def __init__(self, job, delay=0, files=True, dirs=False,
+    def __init__(self, task, files=True, dirs=False, delay=0,
logname="TaskScheduler"):
- assert callable(job), f"job: expected callable, got {type(job)}"
- self._job = job
- assert isinstance(delay, int), f"delay: expected {type(int)}, got {type(delay)}"
+ assert callable(task), \
+ f"task: expected callable, got {type(task)}"
+ self._task = task
+ assert isinstance(delay, int), \
+ f"delay: expected {type(int)}, got {type(delay)}"
self._delay = delay
- assert isinstance(files, bool), f"files: expected {type(bool)}, got {type(files)}"
+ assert isinstance(files, bool), \
+ f"files: expected {type(bool)}, got {type(files)}"
self._files = files
- assert isinstance(dirs, bool), f"dirs: expected {type(bool)}, got {type(dirs)}"
+ assert isinstance(dirs, bool), \
+ f"dirs: expected {type(bool)}, got {type(dirs)}"
self._dirs = dirs
self._tasks = {}
self._log = logging.getLogger(logname)
@@ -105,7 +128,7 @@ class TaskScheduler:
f"received event {maskname} on '{path}', "
f"schedule task {task_id} (delay={self._delay}s)")
task = Task(
- event, self._delay, task_id=task_id, job=self._job,
+ event, self._delay, task_id, self._task,
callback=self._task_started)
self._tasks[path] = task
task.start()
@@ -123,13 +146,14 @@ class TaskScheduler:
class ShellScheduler(TaskScheduler):
- def __init__(self, cmd, job=None, logname="ShellScheduler",
+ def __init__(self, cmd, task=None, logname="ShellScheduler",
*args, **kwargs):
- assert isinstance(cmd, str), f"cmd: expected {type('')}, got {type(cmd)}"
+ assert isinstance(cmd, str), \
+ f"cmd: expected {type('')}, got {type(cmd)}"
self._cmd = cmd
- super().__init__(*args, job=self.job, logname=logname, **kwargs)
+ super().__init__(*args, task=self.task, logname=logname, **kwargs)
- async def job(self, event, task_id):
+ async def task(self, event, task_id):
maskname = event.maskname.split("|", 1)[0]
cmd = self._cmd
cmd = cmd.replace("{maskname}", shell_quote(maskname))
@@ -146,6 +170,74 @@ class ShellScheduler(TaskScheduler):
await proc.communicate()
+class EventMap:
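+    """Map inotify event types to one or more callables; None unsets a type."""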
+ flags = {**pyinotify.EventsCodes.OP_FLAGS,
+ **pyinotify.EventsCodes.EVENT_FLAGS}
+
+ def __init__(self, event_map=None):
+ self._map = {}
+ if event_map is not None:
+ assert isinstance(event_map, dict), \
+ f"event_map: expected {type(dict)}, got {type(event_map)}"
+ for flag, func in event_map.items():
+ self.set(flag, func)
+
+ def get(self):
+ return self._map
+
+ def set(self, flag, values):
+ assert flag in EventMap.flags, \
+ f"event_map: invalid flag: {flag}"
+ if values is None:
+ if flag in self._map:
+ del self._map[flag]
+ else:
+ if not isinstance(values, list):
+ values = [values]
+
+ for value in values:
+ assert callable(value), \
+ f"event_map: {flag}: expected callable, got {type(value)}"
+
+ self._map[flag] = values
+
+
+class Watch:
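+    """Watch a path for the event types in an event map and build a notifier."""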
+ def __init__(self, path, event_map, rec=False, auto_add=False):
+ assert isinstance(path, str), \
+ f"path: expected {type('')}, got {type(path)}"
+ self.path = path
+ if isinstance(event_map, EventMap):
+ self.event_map = event_map
+ elif isinstance(event_map, dict):
+ self.event_map = EventMap(event_map)
+ else:
+ raise AssertionError(
+ f"event_map: expected {type(EventMap)} or {type(dict)}, "
+ f"got {type(event_map)}")
+
+ assert isinstance(rec, bool), \
+ f"rec: expected {type(bool)}, got {type(rec)}"
+ self.rec = rec
+ assert isinstance(auto_add, bool), \
+ f"auto_add: expected {type(bool)}, got {type(auto_add)}"
+ self.auto_add = auto_add
+
+ def event_notifier(self, wm, loop):
+ handler = pyinotify.ProcessEvent()
+ mask = False
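+        # Bind the mapped callables to process_<FLAG> handler methods
+        # and combine all mapped flags into a single watch mask.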
+ for flag, values in self.event_map.get().items():
+ setattr(handler, f"process_{flag}", TaskList(values).execute)
+ if not mask:
+ mask = EventMap.flags[flag]
+ else:
+ mask = mask | EventMap.flags[flag]
+
+ wm.add_watch(self.path, mask, rec=self.rec, auto_add=self.auto_add,
+ do_glob=True)
+ return pyinotify.AsyncioNotifier(wm, loop, default_proc_fun=handler)
+
+
class FileManager:
def __init__(self, rules, auto_create=True, rec=False,
logname="FileManager"):
@@ -168,7 +260,7 @@ class FileManager:
self._rec = rec
self._log = logging.getLogger(logname)
- async def job(self, event, task_id):
+ async def task(self, event, task_id):
path = event.pathname
match = None
for rule in self._rules:
@@ -223,26 +315,30 @@ class FileManager:
self._log.warning(f"{task_id}: no rule matches path '{path}'")
-class ExecList:
- def __init__(self):
- self._list = []
+class PyinotifydConfig:
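+    """Hold the pyinotifyd config options: watches, loglevel, shutdown_timeout."""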
+ def __init__(self, watches=[], loglevel=logging.INFO, shutdown_timeout=30):
+ if not isinstance(watches, list):
+ watches = [watches]
- def add(self, func):
- self._list.append(func)
+ self.set_watches(watches)
- def remove(self, func):
- self._list.remove(func)
+ assert isinstance(loglevel, int), \
+ f"loglevel: expected {type(int)}, got {type(loglevel)}"
+ self.loglevel = loglevel
- def run(self, event):
- for func in self._list:
- func(event)
+ assert isinstance(shutdown_timeout, int), \
+ f"shutdown_timeout: expected {type(int)}, " \
+ f"got {type(shutdown_timeout)}"
+ self.shutdown_timeout = shutdown_timeout
+ def add_watch(self, *args, **kwargs):
+ self.watches.append(Watch(*args, **kwargs))
-def add_mask(new_mask, current_mask=False):
- if not current_mask:
- return new_mask
- else:
- return current_mask | new_mask
+    def set_watches(self, watches):
+        self.watches = []
+        for watch in watches:
+            assert isinstance(watch, Watch), \
+                f"watches: expected {Watch}, got {type(watch)}"
+            self.watches.append(watch)
async def shutdown(timeout=30):
@@ -290,18 +386,16 @@ def main():
print(f"pyinotifyd ({__version__})")
sys.exit(0)
- cfg = {"watches": [],
- "loglevel": logging.INFO,
- "shutdown_timeout": 30}
-
try:
- cfg_vars = {"pyinotifyd_config": cfg}
+ cfg = {}
with open(args.config, "r") as c:
- exec(c.read(), globals(), cfg_vars)
-
- cfg.update(cfg_vars["pyinotifyd_config"])
+ exec(c.read(), globals(), cfg)
+ cfg = cfg["pyinotifyd_config"]
+ assert isinstance(cfg, PyinotifydConfig), \
+ f"pyinotifyd_config: expected {type(PyinotifydConfig)}, " \
+ f"got {type(cfg)}"
except Exception as e:
- print(f"error in config file: {e}")
+ logging.exception(f"error in config file: {e}")
sys.exit(1)
console = logging.StreamHandler()
@@ -310,52 +404,20 @@ def main():
console.setFormatter(formatter)
if args.debug:
- cfg["loglevel"] = logging.DEBUG
+ loglevel = logging.DEBUG
+ else:
+ loglevel = cfg.loglevel
root_logger = logging.getLogger()
- root_logger.setLevel(cfg["loglevel"])
+ root_logger.setLevel(loglevel)
root_logger.addHandler(console)
- watchable_flags = pyinotify.EventsCodes.OP_FLAGS
- watchable_flags.update(pyinotify.EventsCodes.EVENT_FLAGS)
-
wm = pyinotify.WatchManager()
loop = asyncio.get_event_loop()
notifiers = []
- for watchcfg in cfg["watches"]:
- watch = {"path": "",
- "rec": False,
- "auto_add": False,
- "event_map": {}}
- watch.update(watchcfg)
- if not watch["path"]:
- continue
-
- mask = False
- handler = pyinotify.ProcessEvent()
- for flag, values in watch["event_map"].items():
- if flag not in watchable_flags or values is None:
- continue
-
- if not isinstance(values, list):
- values = [values]
-
- mask = add_mask(pyinotify.EventsCodes.ALL_FLAGS[flag], mask)
- exec_list = ExecList()
- for value in values:
- assert callable(value), \
- f"event_map['{flag}']: expected callable, " \
- f"got {type(value)}"
- exec_list.add(value)
-
- setattr(handler, f"process_{flag}", exec_list.run)
-
- logging.info(f"start watching {watch['path']}")
- wm.add_watch(
- watch["path"], mask, rec=watch["rec"], auto_add=watch["auto_add"],
- do_glob=True)
- notifiers.append(pyinotify.AsyncioNotifier(
- wm, loop, default_proc_fun=handler))
+ for watch in cfg.watches:
+ logging.info(f"start watching '{watch.path}'")
+ notifiers.append(watch.event_notifier(wm, loop))
try:
loop.run_forever()
@@ -365,7 +427,7 @@ def main():
for notifier in notifiers:
notifier.stop()
- loop.run_until_complete(shutdown(timeout=cfg["shutdown_timeout"]))
+ loop.run_until_complete(shutdown(timeout=cfg.shutdown_timeout))
loop.close()
sys.exit(0)