change config object structure and README.md

README.md (85 lines changed)

@@ -1,72 +1,63 @@
 # pyinotifyd

-A daemon to monitore filesystems events with inotify on Linux and execute tasks, which can be Python functions or shell commands. It is build on top of the pyinotify library.
+A daemon to monitor filesystem events with inotify on Linux and execute tasks (Python methods or shell commands) with an optional delay. It is also possible to cancel delayed tasks.

 ## Requirements

 * [pyinotify](https://github.com/seb-m/pyinotify)

 ## Installation

-* Install pyinotifyd with pip.

 ```sh
 pip install pyinotifyd
 ```

-* Modify /etc/pyinotifyd/config.py according to your needs.

 # Configuration

-The config file is written in Python syntax. pyinotifyd reads and executes its content, which means you can add custom Python code to the config file.
+The config file **/etc/pyinotifyd/config.py** is written in Python syntax. pyinotifyd reads and executes its content, which means you can add your own custom Python code to the config file.

-To pass config options to pyinotifyd, define a dictionary named **pyinotifyd_config**.
-
-This is the default:
+## Tasks
+
+Tasks are Python methods that are called when an event occurs.
+
+This is a very simple example task that just logs the task_id and the event:

 ```python
-pyinotifyd_config = {
-    # List of watches, see description below
-    "watches": [],
-
-    # Loglevel (see https://docs.python.org/3/library/logging.html#levels)
-    "loglevel": logging.INFO,
-
-    # Timeout to wait for pending tasks to complete during shutdown
-    "shutdown_timeout": 30
-}
+async def custom_task(event, task_id):
+    logging.info(f"{task_id}: execute example task: {event}")
 ```

+This task can be directly bound to an event in an event map. Although this is the easiest and quickest way, it is usually better to use a scheduler to wrap the task.

 ## Schedulers

-pyinotifyd comes with different schedulers to schedule tasks with an optional delay. The advantages of using a scheduler are consistent logging and the possibility to cancel delayed tasks.
+pyinotifyd has different schedulers to schedule tasks with an optional delay. The advantages of using a scheduler are consistent logging and the possibility to cancel delayed tasks. Furthermore, schedulers can differentiate between files and directories.

 ### TaskScheduler

-This scheduler is used to run Python functions.
+Scheduler to schedule *task* with an optional *delay* in seconds. Use the *files* and *dirs* arguments to schedule tasks only for files and/or directories.
+
+The *logname* argument sets a custom name for log messages. All arguments except *task* are optional.

-<div class="bg-yellow">
-class **TaskScheduler**(*job, delay=0, files=True, dirs=False, logname="TaskScheduler"*)
-</div>
+```python
+s = TaskScheduler(task=custom_task, files=True, dirs=False, delay=0, logname="TaskScheduler")
+```

-Return a **TaskScheduler** object configured to call the Python function *job* with a delay of *delay* seconds. Use *files* and *dirs* to define if *job* is called for events on files and/or directories. Log messages with *logname*.
+TaskScheduler provides two tasks which can be bound to an event in an event map.
+
+* **s.schedule**
+
+  Schedule a task. If there is already a scheduled task, it will be canceled first.
+
+* **s.cancel**
+
+  Cancel a scheduled task.

 ### ShellScheduler

+Scheduler to schedule the shell command *cmd*. The placeholders **{maskname}**, **{pathname}** and **{src_pathname}** are replaced with the actual values of the event. ShellScheduler has the same optional arguments as TaskScheduler and provides the same tasks.
+
-## Watches
-
-A Watch is defined as a dictionary.
-
-This is the default:
-
 ```python
-{
-    # path to watch, globbing is allowed
-    "path": "",
-
-    # set to True to add a watch on each subdirectory
-    "rec": False,
-
-    # set to True to automatically add watches on newly created directories in watched parent path
-    "auto_add": False,
-
-    # dictionary which contains the event map, see description below
-    "event_map": {}
-}
+s1 = ShellScheduler(cmd="/usr/local/bin/task.sh {maskname} {pathname} {src_pathname}")
 ```

-### Event maps
+## Event map

-An event map is defined as a dictionary. It is used to map different event types to Python functions. Those functions are called with the event object and a task id as positional arguments if an event is received. It is possible to set a list of functions to run multiple tasks on a single event. If an event type is not present in the map or None is given, the event type is ignored.
+An event map is used to map event types to tasks. It is possible to set a list of tasks to run multiple tasks on a single event. If the task of an event type is set to None, it is ignored.

 This is an example:

 ```python
-{
-    "IN_CLOSE_NOWRITE": [s1.schedule, s2.schedule],
-    "IN_CLOSE_WRITE": s1.schedule
-}
+event_map = EventMap({"IN_CLOSE_NOWRITE": [s.schedule, s1.schedule],
+                      "IN_CLOSE_WRITE": s.schedule})
 ```

+## Watches
+
+Watch *path* for event types in *event_map* and execute the corresponding task(s). If *rec* is True, a watch will be added on each subdirectory in *path*. If *auto_add* is True, a watch will be added automatically on newly created subdirectories in *path*.
+
+```python
+watch = Watch(path="/tmp", event_map=event_map, rec=False, auto_add=False)
+```
+
+## PyinotifydConfig
+
+pyinotifyd expects an instance of PyinotifydConfig named **pyinotifyd_config** that holds its config options. The options are a list of *watches*, the *loglevel* (see https://docs.python.org/3/library/logging.html#levels) and the *shutdown_timeout*. pyinotifyd will wait *shutdown_timeout* seconds for pending tasks to complete during shutdown.
+
+```python
+pyinotifyd_config = PyinotifydConfig(watches=[watch], loglevel=logging.INFO, shutdown_timeout=30)
+```
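The schedule/cancel semantics described in the README above (a new event for the same path cancels the pending delayed task first; an explicit cancel drops it entirely) can be sketched with plain asyncio. This is a hypothetical standalone illustration, not pyinotifyd's actual TaskScheduler:

```python
import asyncio


class DelayedScheduler:
    """Sketch of delay-and-cancel task scheduling, keyed by path (hypothetical)."""

    def __init__(self, task, delay=0):
        self._task = task
        self._delay = delay
        self._pending = {}

    def schedule(self, path):
        # A task already scheduled for the same path is canceled first.
        self.cancel(path)
        self._pending[path] = asyncio.get_running_loop().create_task(
            self._run(path))

    def cancel(self, path):
        pending = self._pending.pop(path, None)
        if pending is not None:
            pending.cancel()

    async def _run(self, path):
        await asyncio.sleep(self._delay)
        self._pending.pop(path, None)
        await self._task(path)


results = []


async def demo():
    async def record(path):
        results.append(path)

    s = DelayedScheduler(record, delay=0.01)
    s.schedule("/tmp/a")   # canceled by the next schedule call
    s.schedule("/tmp/a")   # survives and runs after the delay
    s.schedule("/tmp/b")
    s.cancel("/tmp/b")     # never runs
    await asyncio.sleep(0.05)


asyncio.run(demo())
print(results)  # ['/tmp/a']
```

The per-path dictionary mirrors the debounce-style behavior the README attributes to s.schedule and s.cancel.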
Binary file not shown.

@@ -4,60 +4,56 @@
 ####################################
 # Example usage of TaskScheduler #
 ####################################

-#def custom_job(event, task_id):
-#    logging.info(f"{task_id}: execute task for {event}")
+#async def custom_task(event, task_id):
+#    logging.info(f"{task_id}: execute example task: {event}")
 #
-#s = TaskScheduler(delay=10, job=custom_job)
+#s = TaskScheduler(task=custom_task, files=True, dirs=False)

 #####################################################
 # Example usage of TaskScheduler with FileManager #
 #####################################################

-#fm = FileManager(
-#    rules=[
-#        {"action": "move",
-#         "src_re": r"^(?P<path>.*)",
-#         "dst_re": r"\g<path>.processed"}
-#    ]
-#)
-#
-#s = TaskScheduler(delay=10, job=fm.job)
+#rules = [{"action": "move",
+#          "src_re": r"^(?P<path>.*)",
+#          "dst_re": r"\g<path>.processed"}]
+#fm = FileManager(rules=rules, auto_create=True)
+#s = TaskScheduler(task=fm.task, delay=10, files=True, dirs=False)

 #####################################
 # Example usage of ShellScheduler #
 #####################################

-#s = ShellScheduler(cmd="/usr/local/bin/task.sh {maskname} {pathname} {src_pathname}")
+#cmd = "/usr/local/bin/task.sh {maskname} {pathname} {src_pathname}"
+#s = ShellScheduler(cmd=cmd)

+###################
+# Example watch #
+###################
+
+#event_map = EventMap({"IN_ACCESS": None,
+#                      "IN_ATTRIB": None,
+#                      "IN_CLOSE_NOWRITE": None,
+#                      "IN_CLOSE_WRITE": s.schedule,
+#                      "IN_CREATE": None,
+#                      "IN_DELETE": s.cancel,
+#                      "IN_DELETE_SELF": s.cancel,
+#                      "IN_IGNORED": None,
+#                      "IN_MODIFY": s.cancel,
+#                      "IN_MOVE_SELF": None,
+#                      "IN_MOVED_FROM": s.cancel,
+#                      "IN_MOVED_TO": s.schedule,
+#                      "IN_OPEN": None,
+#                      "IN_Q_OVERFLOW": None,
+#                      "IN_UNMOUNT": s.cancel})
+#watch = Watch(path="/tmp", event_map=event_map, rec=True, auto_add=True)

 ###############################
 # Example pyinotifyd config #
 ###############################

-#pyinotifyd_config = {
-#    "watches": [
-#        {"path": "/tmp",
-#         "rec": True,
-#         "auto_add": True,
-#         "event_map": {
-#             "IN_ACCESS": None,
-#             "IN_ATTRIB": None,
-#             "IN_CLOSE_NOWRITE": None,
-#             "IN_CLOSE_WRITE": s.schedule,
-#             "IN_CREATE": None,
-#             "IN_DELETE": s.cancel,
-#             "IN_DELETE_SELF": s.cancel,
-#             "IN_IGNORED": None,
-#             "IN_MODIFY": s.cancel,
-#             "IN_MOVE_SELF": None,
-#             "IN_MOVED_FROM": s.cancel,
-#             "IN_MOVED_TO": s.schedule,
-#             "IN_OPEN": None,
-#             "IN_Q_OVERFLOW": None,
-#             "IN_UNMOUNT": s.cancel}
-#        }
-#    ],
-#    "loglevel": logging.INFO,
-#    "shutdown_timeout": 15}
+#pyinotifyd_config = PyinotifydConfig(
+#    watches=[watch], loglevel=logging.INFO, shutdown_timeout=30)
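The `{maskname} {pathname} {src_pathname}` placeholders in the ShellScheduler example above are plain string replacement with shell quoting (the diff below shows ShellScheduler doing `cmd.replace(..., shell_quote(...))`). A minimal standalone sketch of that substitution, with hypothetical names:

```python
from shlex import quote as shell_quote


def render_cmd(cmd, maskname, pathname, src_pathname=""):
    # Each placeholder is replaced with the shell-quoted event value,
    # mirroring the template style used in the example config.
    cmd = cmd.replace("{maskname}", shell_quote(maskname))
    cmd = cmd.replace("{pathname}", shell_quote(pathname))
    cmd = cmd.replace("{src_pathname}", shell_quote(src_pathname))
    return cmd


cmd = render_cmd("/usr/local/bin/task.sh {maskname} {pathname}",
                 "IN_CLOSE_WRITE", "/tmp/my file.txt")
print(cmd)  # /usr/local/bin/task.sh IN_CLOSE_WRITE '/tmp/my file.txt'
```

Quoting matters here because pathnames with spaces or shell metacharacters would otherwise split into multiple arguments.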
pyinotifyd.py (220 lines changed)
@@ -29,16 +29,17 @@ from uuid import uuid4

 __version__ = "0.0.1"


 class Task:
-    def __init__(self, event, delay, task_id, job, callback=None,
+    def __init__(self, event, delay, task_id, task, callback=None,
                  logname="Task"):
         self._event = event
         self._path = event.pathname
         self._delay = delay
         self.task_id = task_id
-        self._task = None
-        self._job = job
+        self._job = task
         self._callback = callback
+        self._task = None
         self._log = logging.getLogger(logname)

     async def _start(self):
@@ -66,16 +67,38 @@ class Task:
         self.start()


+class TaskList:
+    def __init__(self, tasks=[]):
+        if not isinstance(tasks, list):
+            tasks = [tasks]
+
+        self._tasks = tasks
+
+    def add(self, task):
+        self._tasks.append(task)
+
+    def remove(self, task):
+        self._tasks.remove(task)
+
+    def execute(self, event):
+        for task in self._tasks:
+            task(event)
+
+
 class TaskScheduler:
-    def __init__(self, job, delay=0, files=True, dirs=False,
+    def __init__(self, task, files, dirs, delay=0,
                  logname="TaskScheduler"):
-        assert callable(job), f"job: expected callable, got {type(job)}"
-        self._job = job
-        assert isinstance(delay, int), f"delay: expected {type(int)}, got {type(delay)}"
+        assert callable(task), \
+            f"task: expected callable, got {type(task)}"
+        self._task = task
+        assert isinstance(delay, int), \
+            f"delay: expected {type(int)}, got {type(delay)}"
         self._delay = delay
-        assert isinstance(files, bool), f"files: expected {type(bool)}, got {type(files)}"
+        assert isinstance(files, bool), \
+            f"files: expected {type(bool)}, got {type(files)}"
         self._files = files
-        assert isinstance(dirs, bool), f"dirs: expected {type(bool)}, got {type(dirs)}"
+        assert isinstance(dirs, bool), \
+            f"dirs: expected {type(bool)}, got {type(dirs)}"
         self._dirs = dirs
         self._tasks = {}
         self._log = logging.getLogger(logname)
@@ -105,7 +128,7 @@ class TaskScheduler:
                 f"received event {maskname} on '{path}', "
                 f"schedule task {task_id} (delay={self._delay}s)")
             task = Task(
-                event, self._delay, task_id=task_id, job=self._job,
+                event, self._delay, task_id, self._task,
                 callback=self._task_started)
             self._tasks[path] = task
             task.start()
@@ -123,13 +146,14 @@ class TaskScheduler:


 class ShellScheduler(TaskScheduler):
-    def __init__(self, cmd, job=None, logname="ShellScheduler",
+    def __init__(self, cmd, task=None, logname="ShellScheduler",
                  *args, **kwargs):
-        assert isinstance(cmd, str), f"cmd: expected {type('')}, got {type(cmd)}"
+        assert isinstance(cmd, str), \
+            f"cmd: expected {type('')}, got {type(cmd)}"
         self._cmd = cmd
-        super().__init__(*args, job=self.job, logname=logname, **kwargs)
+        super().__init__(*args, task=self.task, logname=logname, **kwargs)

-    async def job(self, event, task_id):
+    async def task(self, event, task_id):
         maskname = event.maskname.split("|", 1)[0]
         cmd = self._cmd
         cmd = cmd.replace("{maskname}", shell_quote(maskname))
@@ -146,6 +170,74 @@ class ShellScheduler(TaskScheduler):
         await proc.communicate()


+class EventMap:
+    flags = {**pyinotify.EventsCodes.OP_FLAGS,
+             **pyinotify.EventsCodes.EVENT_FLAGS}
+
+    def __init__(self, event_map=None):
+        self._map = {}
+        if event_map is not None:
+            assert isinstance(event_map, dict), \
+                f"event_map: expected {type(dict)}, got {type(event_map)}"
+            for flag, func in event_map.items():
+                self.set(flag, func)
+
+    def get(self):
+        return self._map
+
+    def set(self, flag, values):
+        assert flag in EventMap.flags, \
+            f"event_map: invalid flag: {flag}"
+        if values is None:
+            if flag in self._map:
+                del self._map[flag]
+        else:
+            if not isinstance(values, list):
+                values = [values]
+
+            for value in values:
+                assert callable(value), \
+                    f"event_map: {flag}: expected callable, got {type(value)}"
+
+            self._map[flag] = values
+
+
+class Watch:
+    def __init__(self, path, event_map, rec=False, auto_add=False):
+        assert isinstance(path, str), \
+            f"path: expected {type('')}, got {type(path)}"
+        self.path = path
+        if isinstance(event_map, EventMap):
+            self.event_map = event_map
+        elif isinstance(event_map, dict):
+            self.event_map = EventMap(event_map)
+        else:
+            raise AssertionError(
+                f"event_map: expected {type(EventMap)} or {type(dict)}, "
+                f"got {type(event_map)}")
+
+        assert isinstance(rec, bool), \
+            f"rec: expected {type(bool)}, got {type(rec)}"
+        self.rec = rec
+        assert isinstance(auto_add, bool), \
+            f"auto_add: expected {type(bool)}, got {type(auto_add)}"
+        self.auto_add = auto_add
+
+    def event_notifier(self, wm, loop):
+        handler = pyinotify.ProcessEvent()
+        mask = False
+        for flag, values in self.event_map.get().items():
+            setattr(handler, f"process_{flag}", TaskList(values).execute)
+            if not mask:
+                mask = EventMap.flags[flag]
+            else:
+                mask = mask | EventMap.flags[flag]
+
+        wm.add_watch(self.path, mask, rec=self.rec, auto_add=self.auto_add,
+                     do_glob=True)
+        return pyinotify.AsyncioNotifier(wm, loop, default_proc_fun=handler)
+
+
 class FileManager:
     def __init__(self, rules, auto_create=True, rec=False,
                  logname="FileManager"):
@@ -168,7 +260,7 @@ class FileManager:
         self._rec = rec
         self._log = logging.getLogger(logname)

-    async def job(self, event, task_id):
+    async def task(self, event, task_id):
         path = event.pathname
         match = None
         for rule in self._rules:
@@ -223,26 +315,30 @@ class FileManager:
             self._log.warning(f"{task_id}: no rule matches path '{path}'")


-class ExecList:
-    def __init__(self):
-        self._list = []
-
-    def add(self, func):
-        self._list.append(func)
-
-    def remove(self, func):
-        self._list.remove(func)
-
-    def run(self, event):
-        for func in self._list:
-            func(event)
-
-
-def add_mask(new_mask, current_mask=False):
-    if not current_mask:
-        return new_mask
-    else:
-        return current_mask | new_mask
+class PyinotifydConfig:
+    def __init__(self, watches=[], loglevel=logging.INFO, shutdown_timeout=30):
+        if not isinstance(watches, list):
+            watches = [watches]
+
+        self.set_watches(watches)
+
+        assert isinstance(loglevel, int), \
+            f"loglevel: expected {type(int)}, got {type(loglevel)}"
+        self.loglevel = loglevel
+
+        assert isinstance(shutdown_timeout, int), \
+            f"shutdown_timeout: expected {type(int)}, " \
+            f"got {type(shutdown_timeout)}"
+        self.shutdown_timeout = shutdown_timeout
+
+    def add_watch(self, *args, **kwargs):
+        self.watches.append(Watch(*args, **kwargs))
+
+    def set_watches(self, watches):
+        self.watches = []
+        for watch in watches:
+            assert isinstance(watch, Watch), \
+                f"watches: expected {type(Watch)}, got {type(watch)}"


 async def shutdown(timeout=30):
@@ -290,18 +386,16 @@ def main():
         print(f"pyinotifyd ({__version__})")
         sys.exit(0)

-    cfg = {"watches": [],
-           "loglevel": logging.INFO,
-           "shutdown_timeout": 30}
-
     try:
-        cfg_vars = {"pyinotifyd_config": cfg}
+        cfg = {}
         with open(args.config, "r") as c:
-            exec(c.read(), globals(), cfg_vars)
-        cfg.update(cfg_vars["pyinotifyd_config"])
+            exec(c.read(), globals(), cfg)
+        cfg = cfg["pyinotifyd_config"]
+        assert isinstance(cfg, PyinotifydConfig), \
+            f"pyinotifyd_config: expected {type(PyinotifydConfig)}, " \
+            f"got {type(cfg)}"
     except Exception as e:
-        print(f"error in config file: {e}")
+        logging.exception(f"error in config file: {e}")
         sys.exit(1)

     console = logging.StreamHandler()
@@ -310,52 +404,20 @@ def main():
     console.setFormatter(formatter)

     if args.debug:
-        cfg["loglevel"] = logging.DEBUG
+        loglevel = logging.DEBUG
+    else:
+        loglevel = cfg.loglevel

     root_logger = logging.getLogger()
-    root_logger.setLevel(cfg["loglevel"])
+    root_logger.setLevel(loglevel)
     root_logger.addHandler(console)

-    watchable_flags = pyinotify.EventsCodes.OP_FLAGS
-    watchable_flags.update(pyinotify.EventsCodes.EVENT_FLAGS)
-
     wm = pyinotify.WatchManager()
     loop = asyncio.get_event_loop()
     notifiers = []
-    for watchcfg in cfg["watches"]:
-        watch = {"path": "",
-                 "rec": False,
-                 "auto_add": False,
-                 "event_map": {}}
-        watch.update(watchcfg)
-        if not watch["path"]:
-            continue
-
-        mask = False
-        handler = pyinotify.ProcessEvent()
-        for flag, values in watch["event_map"].items():
-            if flag not in watchable_flags or values is None:
-                continue
-
-            if not isinstance(values, list):
-                values = [values]
-
-            mask = add_mask(pyinotify.EventsCodes.ALL_FLAGS[flag], mask)
-            exec_list = ExecList()
-            for value in values:
-                assert callable(value), \
-                    f"event_map['{flag}']: expected callable, " \
-                    f"got {type(value)}"
-                exec_list.add(value)
-
-            setattr(handler, f"process_{flag}", exec_list.run)
-
-        logging.info(f"start watching {watch['path']}")
-        wm.add_watch(
-            watch["path"], mask, rec=watch["rec"], auto_add=watch["auto_add"],
-            do_glob=True)
-        notifiers.append(pyinotify.AsyncioNotifier(
-            wm, loop, default_proc_fun=handler))
+    for watch in cfg.watches:
+        logging.info(f"start watching '{watch.path}'")
+        notifiers.append(watch.event_notifier(wm, loop))

     try:
         loop.run_forever()
@@ -365,7 +427,7 @@ def main():
     for notifier in notifiers:
         notifier.stop()

-    loop.run_until_complete(shutdown(timeout=cfg["shutdown_timeout"]))
+    loop.run_until_complete(shutdown(timeout=cfg.shutdown_timeout))
     loop.close()

     sys.exit(0)