Fawkes
------
Fawkes is a privacy protection system developed by researchers at [SANDLab](https://sandlab.cs.uchicago.edu/), University of Chicago. For more information about the project, please refer to our project [webpage](https://sandlab.cs.uchicago.edu/fawkes/). Contact us at fawkes-team@googlegroups.com.
We published an academic paper to summarize our work "[Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models](https://www.shawnshan.com/files/publication/fawkes.pdf)" at *USENIX Security 2020*.
NEW! If you would like to use Fawkes to protect your identity, please check out our software and binary implementation on the [website](https://sandlab.cs.uchicago.edu/fawkes/#code).
Copyright
---------
This code is intended only for personal privacy protection or academic research.
We are currently exploring the filing of a provisional patent on the Fawkes algorithm.
Usage
-----
`$ fawkes`
Options:
* `-m`, `--mode` : the tradeoff between privacy protection and perturbation size. Select from `min`, `low`, `mid`, `high`. The higher the mode, the more perturbation is added to the image and the stronger the protection.
* `-d`, `--directory` : the directory with images to run protection.
* `-g`, `--gpu` : the GPU id when using GPU for optimization.
* `--batch-size` : number of images to optimize together. Change to >1 only if you have extremely powerful compute.
* `--format` : format of the output image (png or jpg).
When `--mode` is `custom`:
* `--th` : perturbation threshold.
* `--max-step` : number of optimization steps to run.
* `--lr` : learning rate for the optimization.
* `--feature-extractor` : name of the feature extractor to use.
* `--separate_target` : whether to select a separate target for each face in the directory.
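For instance, the custom flags might be combined as follows (a sketch only: the threshold, step count, learning rate, and extractor name below are illustrative placeholders, not recommended values):

```shell
# Hypothetical custom-mode run; all flag values here are placeholders for illustration.
fawkes -d ./imgs -m custom --th 0.01 --max-step 1000 --lr 2 \
       --feature-extractor high_extract --separate_target --format png
```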
### Example
`fawkes -d ./imgs --mode min`
### Tips
- The perturbation generation takes ~60 seconds per image on a CPU machine, and is much faster on a GPU machine. Use `batch-size=1` on CPU and `batch-size>1` on GPUs.
- Turn on `--separate_target` if the images in the directory belong to different people; otherwise, leave it off.
- Run on GPU. The current Fawkes package and binary do not support GPU. To use a GPU, clone this repository, install the required packages from `setup.py`, and replace `tensorflow` with `tensorflow-gpu`. Then run Fawkes with `python3 fawkes/protection.py [args]`.
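The from-source GPU setup described above might look roughly like this (a sketch, assuming `git` and `pip` are available; package versions are not pinned here):

```shell
# Clone the repository and install from source.
git clone https://github.com/Shawn-Shan/fawkes.git
cd fawkes
# Swap the CPU TensorFlow dependency for the GPU build (version range is an assumption).
pip install tensorflow-gpu
pip install -e .   # pulls in the remaining requirements from setup.py
# Run directly from the source tree, selecting GPU 0.
python3 fawkes/protection.py -d ./imgs --mode min --gpu 0 --batch-size 4
```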
![Obama in different settings](https://sandlab.cs.uchicago.edu/fawkes/files/obama.png)
### How do I know my images are secure?
We are actively working on this. Python scripts that can test the protection effectiveness will be ready shortly.
Quick Installation
------------------
Install from [PyPI](https://pypi.org/project/fawkes/):
```
pip install fawkes
```
If you don't have root privileges, try installing into user space: `pip install --user fawkes`.
Contribute to Fawkes
--------------------
If you would like to help make the Fawkes software better, please check out our [project list](https://github.com/Shawn-Shan/fawkes/projects/1), which contains our TODOs. If you are confident in helping, please open a pull request explaining your planned changes. We will do our best to approve it asap, and once approved, you can work on it.
### Citation
```
@inproceedings{shan2020fawkes,
title={Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models},
author={Shan, Shawn and Wenger, Emily and Zhang, Jiayun and Li, Huiying and Zheng, Haitao and Zhao, Ben Y},
booktitle={Proc. of USENIX Security},
year={2020}
}
```