Commit 2f0286fd authored by Alex

Merge Upstream master

parents 4028ee18 1cd42304
### OSX ###
*.DS_Store
.AppleDouble
.LSOverride
# Icon must end with two \r
Icon
# Thumbnails
._*
# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent
# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
.pytest_cache/
nosetests.xml
coverage.xml
*.cover
.hypothesis/
# Translations
*.mo
*.pot
# Flask stuff:
flask/
instance/settings.py
.webassets-cache
# Scrapy stuff:
.scrapy
# celery beat schedule file
celerybeat-schedule.*
# Node
node_modules
npm-debug.log
# Docker
Dockerfile*
docker-compose*
.dockerignore
# Git
.git
.gitattributes
.gitignore
# Vscode
.vscode
*.code-workspace
# Others
.lgtm.yml
.travis.yml
ENVIRONMENT=development
PDA_DB_HOST=localhost
PDA_DB_NAME=powerdnsadmin
PDA_DB_USER=powerdnsadmin
PDA_DB_PASSWORD=changeme
PDA_DB_PORT=3306
PDNS_HOST=localhost
PDNS_API_KEY=changeme
@@ -25,22 +25,17 @@ nosetests.xml
flask
config.py
configs/production.py
logfile.log
settings.json
advanced_settings.json
idp.crt
log.txt
pdns.db
*.bak
db_repository/*
upload/avatar/*
tmp/*
.ropeproject
.sonarlint/*
node_modules
powerdnsadmin/static/generated
.webassets-cache
.venv*
.pytest_cache
--modules-folder "./powerdnsadmin/static/node_modules"
@@ -12,25 +12,21 @@ A PowerDNS web interface with advanced features.
- User access management based on domain
- User activity logging
- Support Local DB / SAML / LDAP / Active Directory user authentication
- Support Google / Github / Azure / OpenID OAuth
- Support Two-factor authentication (TOTP)
- Dashboard and pdns service statistics
- DynDNS 2 protocol support
- Edit IPv6 PTRs using IPv6 addresses directly (no more editing of literal addresses!)
- Limited API for manipulating zones and records
### Running PowerDNS-Admin
There are several ways to run PowerDNS-Admin. The following is a simple way to start PowerDNS-Admin using Docker.
Step 1: Update the configuration
Edit the `docker-compose.yml` file to update the database connection string in `SQLALCHEMY_DATABASE_URI`. The other supported environment variables are listed in [legal_envvars](https://github.com/ngoduykhanh/PowerDNS-Admin/blob/master/configs/docker_config.py#L5-L37).
Step 2: Start the Docker containers
```$ docker-compose up```
@@ -40,142 +36,3 @@ You can now access PowerDNS-Admin at http://localhost:9191
### Screenshots
![dashboard](https://user-images.githubusercontent.com/6447444/44068603-0d2d81f6-9fa5-11e8-83af-14e2ad79e370.png)
### Running tests
**NOTE:** Tests create `__pycache__` folders owned by root, which can break later rebuilds (e.g. an "invalid tar headers" message). When that happens, remove those folders as root.
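A minimal Python sketch to clear them (run it with sufficient privileges, e.g. via sudo; scanning from the repository root is an assumption):
```
import pathlib
import shutil

# Remove every __pycache__ directory below the current directory.
for cache_dir in pathlib.Path('.').rglob('__pycache__'):
    shutil.rmtree(cache_dir, ignore_errors=True)
```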
1. Build images
```
docker-compose -f docker-compose-test.yml build
```
2. Run tests
```
docker-compose -f docker-compose-test.yml up
```
3. Rerun tests

To tear down the previous environment
```
docker-compose -f docker-compose-test.yml down
```
To run the tests again
```
docker-compose -f docker-compose-test.yml up
```
### API Usage
1. Run the Docker containers with `docker-compose up`, then go to the UI at http://localhost:9191; the Swagger API specification is served at http://localhost:9191/swagger
2. Click to register a user, e.g. user: admin and password: admin
3. Log in to the UI and, in Settings, enable "allow domain creation for users"; now you can create and manage domains with the admin account and also with ordinary users
4. Encode your username and password in base64. In our example, with user admin and password admin, we type the following on a Linux command line:
```
someuser@somehost:~$echo -n 'admin:admin'|base64
YWRtaW46YWRtaW4=
```
We use the generated output for basic authentication. Authenticated this way as a user, we can create/delete/get zones and create/delete/get/update API keys.
Creating a domain:
```
curl -L -vvv -H 'Content-Type: application/json' -H 'Authorization: Basic YWRtaW46YWRtaW4=' -X POST http://localhost:9191/api/v1/pdnsadmin/zones --data '{"name": "yourdomain.com.", "kind": "NATIVE", "nameservers": ["ns1.mydomain.com."]}'
```
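The same call from Python, as a minimal sketch using the requests library (the credentials, zone name, and nameservers are the example placeholders from above):
```
import requests

# Create a zone via the PowerDNS-Admin API; requests builds the
# Basic auth header itself, so no manual base64 encoding is needed.
resp = requests.post(
    'http://localhost:9191/api/v1/pdnsadmin/zones',
    json={'name': 'yourdomain.com.', 'kind': 'NATIVE',
          'nameservers': ['ns1.mydomain.com.']},
    auth=('admin', 'admin'),
)
resp.raise_for_status()
```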
Creating an API key with the Administrator role. An API key can also have the User role; when creating such a key, you must also specify the domains for which it is valid:
```
curl -L -vvv -H 'Content-Type: application/json' -H 'Authorization: Basic YWRtaW46YWRtaW4=' -X POST http://localhost:9191/api/v1/pdnsadmin/apikeys --data '{"description": "masterkey","domains":[], "role": "Administrator"}'
```
The call above returns a response like this:
```
[{"description": "samekey", "domains": [], "role": {"name": "Administrator", "id": 1}, "id": 2, "plain_key": "aGCthP3KLAeyjZI"}]
```
Take the plain_key value and base64 encode it; this is the only time the API key is available in plain text, so save it somewhere:
```
someuser@somehost:~$echo -n 'aGCthP3KLAeyjZI'|base64
YUdDdGhQM0tMQWV5alpJ
```
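For reference, the same encoding in Python:
```
import base64

# Encode the plain-text API key for use in the X-API-KEY header.
print(base64.b64encode(b'aGCthP3KLAeyjZI').decode())  # YUdDdGhQM0tMQWV5alpJ
```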
We can use the API key for all calls specified in our API specification (it tries to follow the PowerDNS API 1:1; only the tsigkeys endpoints are not yet implemented). Don't forget to specify the Content-Type header!
Getting the PowerDNS configuration:
```
curl -L -vvv -H 'Content-Type: application/json' -H 'X-API-KEY: YUdDdGhQM0tMQWV5alpJ' -X GET http://localhost:9191/api/v1/servers/localhost/config
```
Creating and updating records:
```
curl -X PATCH -H 'Content-Type: application/json' --data '{"rrsets": [{"name": "test1.yourdomain.com.","type": "A","ttl": 86400,"changetype": "REPLACE","records": [ {"content": "192.0.2.5", "disabled": false} ]},{"name": "test2.yourdomain.com.","type": "AAAA","ttl": 86400,"changetype": "REPLACE","records": [ {"content": "2001:db8::6", "disabled": false} ]}]}' -H 'X-API-Key: YUdDdGhQM0tMQWV5alpJ' http://127.0.0.1:9191/api/v1/servers/localhost/zones/yourdomain.com.
```
Getting a domain:
```
curl -L -vvv -H 'Content-Type: application/json' -H 'X-API-KEY: YUdDdGhQM0tMQWV5alpJ' -X GET http://localhost:9191/api/v1/servers/localhost/zones/yourdomain.com
```
Listing zone records:
```
curl -H 'Content-Type: application/json' -H 'X-API-Key: YUdDdGhQM0tMQWV5alpJ' http://localhost:9191/api/v1/servers/localhost/zones/yourdomain.com
```
Adding a new record:
```
curl -H 'Content-Type: application/json' -X PATCH --data '{"rrsets": [ {"name": "test.yourdomain.com.", "type": "A", "ttl": 86400, "changetype": "REPLACE", "records": [ {"content": "192.0.5.4", "disabled": false } ] } ] }' -H 'X-API-Key: YUdDdGhQM0tMQWV5alpJ' http://localhost:9191/api/v1/servers/localhost/zones/yourdomain.com | jq .
```
Updating a record:
```
curl -H 'Content-Type: application/json' -X PATCH --data '{"rrsets": [ {"name": "test.yourdomain.com.", "type": "A", "ttl": 86400, "changetype": "REPLACE", "records": [ {"content": "192.0.2.5", "disabled": false, "name": "test.yourdomain.com.", "ttl": 86400, "type": "A"}]}]}' -H 'X-API-Key: YUdDdGhQM0tMQWV5alpJ' http://localhost:9191/api/v1/servers/localhost/zones/yourdomain.com | jq .
```
Deleting a record:
```
curl -H 'Content-Type: application/json' -X PATCH --data '{"rrsets": [ {"name": "test.yourdomain.com.", "type": "A", "ttl": 86400, "changetype": "DELETE"}]}' -H 'X-API-Key: YUdDdGhQM0tMQWV5alpJ' http://localhost:9191/api/v1/servers/localhost/zones/yourdomain.com | jq .
```
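These record operations script easily as well; a minimal Python sketch of the REPLACE call above, reusing the example API key and placeholder names (requests sets the Content-Type header itself):
```
import requests

# Replace (create or update) an A record in the example zone.
rrset = {'rrsets': [{'name': 'test.yourdomain.com.', 'type': 'A',
                     'ttl': 86400, 'changetype': 'REPLACE',
                     'records': [{'content': '192.0.2.5', 'disabled': False}]}]}
resp = requests.patch(
    'http://localhost:9191/api/v1/servers/localhost/zones/yourdomain.com',
    json=rrset,
    headers={'X-API-Key': 'YUdDdGhQM0tMQWV5alpJ'},
)
resp.raise_for_status()
```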
### Generate ER diagram
Install the system dependencies:
```
apt-get install python-dev graphviz libgraphviz-dev pkg-config
```
Install the Python packages:
```
pip install graphviz mysqlclient ERAlchemy
```
Start the stack:
```
docker-compose up -d
```
Load the database credentials:
```
source .env
```
Generate the diagram (note the double quotes, so the `PDA_DB_*` variables expand):
```
eralchemy -i "mysql://${PDA_DB_USER}:${PDA_DB_PASSWORD}@$(docker inspect powerdns-admin-mysql | jq -jr '.[0].NetworkSettings.Networks.powerdnsadmin_default.IPAddress'):3306/powerdns_admin" -o /tmp/output.pdf
```
from werkzeug.contrib.fixers import ProxyFix
from flask import Flask, request, session, redirect, url_for
from flask_login import LoginManager
from flask_sqlalchemy import SQLAlchemy as SA
from flask_migrate import Migrate
from authlib.flask.client import OAuth as AuthlibOAuth
from sqlalchemy.exc import OperationalError
from flask_seasurf import SeaSurf
### SYBPATCH ###
from app.customboxes import customBoxes
### SYBPATCH ###
# subclass SQLAlchemy to enable pool_pre_ping
class SQLAlchemy(SA):
    def apply_pool_defaults(self, app, options):
        SA.apply_pool_defaults(self, app, options)
        options["pool_pre_ping"] = True
from app.assets import assets
app = Flask(__name__)
app.config.from_object('config')
app.wsgi_app = ProxyFix(app.wsgi_app)
csrf = SeaSurf(app)
assets.init_app(app)
#### CONFIGURE LOGGER ####
from app.lib.log import logger
logging = logger('powerdns-admin', app.config['LOG_LEVEL'], app.config['LOG_FILE']).config()
login_manager = LoginManager()
login_manager.init_app(app)
db = SQLAlchemy(app) # database
migrate = Migrate(app, db) # flask-migrate
authlib_oauth_client = AuthlibOAuth(app) # authlib oauth
if app.config.get('SAML_ENABLED') and app.config.get('SAML_ENCRYPT'):
    from app.lib import certutil
    if not certutil.check_certificate():
        certutil.create_self_signed_cert()
from app import models
from app.blueprints.api import api_blueprint
app.register_blueprint(api_blueprint, url_prefix='/api/v1')
from app import views
# "boxId":("title","filter")
class customBoxes:
boxes = {"reverse": (" ", " "), "ip6arpa": ("ip6","%.ip6.arpa"), "inaddrarpa": ("in-addr","%.in-addr.arpa")}
order = ["reverse", "ip6arpa", "inaddrarpa"]
import logging


class logger(object):
    def __init__(self, name, level, logfile):
        self.name = name
        self.level = level
        self.logfile = logfile

    def config(self):
        # define logger and set logging level
        logger = logging.getLogger()
        if self.level == 'CRITICAL':
            level = logging.CRITICAL
        elif self.level == 'ERROR':
            level = logging.ERROR
        elif self.level == 'WARNING':
            level = logging.WARNING
        elif self.level == 'DEBUG':
            level = logging.DEBUG
        else:
            level = logging.INFO
        logger.setLevel(level)

        # set the requests module's log level
        logging.getLogger("requests").setLevel(logging.CRITICAL)

        if self.logfile:
            # define handler to log into file
            file_log_handler = logging.FileHandler(self.logfile)
            logger.addHandler(file_log_handler)
            # define logging format for file
            file_formatter = logging.Formatter(
                '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
            file_log_handler.setFormatter(file_formatter)

        # define handler to log into console
        stderr_log_handler = logging.StreamHandler()
        logger.addHandler(stderr_log_handler)
        # define logging format for console
        console_formatter = logging.Formatter(
            '[%(asctime)s] [%(levelname)s] | %(message)s')
        stderr_log_handler.setFormatter(console_formatter)

        return logging.getLogger(self.name)
import re
import json
import requests
import hashlib
import ipaddress
import os
from app import app
from distutils.version import StrictVersion
from urllib.parse import urlparse
from datetime import datetime, timedelta
from threading import Thread
from .certutil import KEY_FILE, CERT_FILE
import logging as logger
logging = logger.getLogger(__name__)
if app.config['SAML_ENABLED']:
    from onelogin.saml2.auth import OneLogin_Saml2_Auth
    from onelogin.saml2.idp_metadata_parser import OneLogin_Saml2_IdPMetadataParser
    idp_timestamp = datetime(1970, 1, 1)
    idp_data = None
    # Only pass required_sso_binding when SAML_IDP_SSO_BINDING is configured,
    # mirroring retrieve_idp_data() below.
    if 'SAML_IDP_SSO_BINDING' in app.config:
        idp_data = OneLogin_Saml2_IdPMetadataParser.parse_remote(
            app.config['SAML_METADATA_URL'],
            entity_id=app.config.get('SAML_IDP_ENTITY_ID', None),
            required_sso_binding=app.config['SAML_IDP_SSO_BINDING'])
    else:
        idp_data = OneLogin_Saml2_IdPMetadataParser.parse_remote(
            app.config['SAML_METADATA_URL'],
            entity_id=app.config.get('SAML_IDP_ENTITY_ID', None))
    if idp_data is None:
        print('SAML: IDP Metadata initial load failed')
        exit(-1)
    idp_timestamp = datetime.now()
def get_idp_data():
    global idp_data, idp_timestamp
    lifetime = timedelta(minutes=app.config['SAML_METADATA_CACHE_LIFETIME'])
    if idp_timestamp + lifetime < datetime.now():
        # Cache expired: refresh the metadata in the background and
        # serve the stale copy in the meantime.
        background_thread = Thread(target=retrieve_idp_data)
        background_thread.start()
    return idp_data
def retrieve_idp_data():
    global idp_data, idp_timestamp
    if 'SAML_IDP_SSO_BINDING' in app.config:
        new_idp_data = OneLogin_Saml2_IdPMetadataParser.parse_remote(
            app.config['SAML_METADATA_URL'],
            entity_id=app.config.get('SAML_IDP_ENTITY_ID', None),
            required_sso_binding=app.config['SAML_IDP_SSO_BINDING'])
    else:
        new_idp_data = OneLogin_Saml2_IdPMetadataParser.parse_remote(
            app.config['SAML_METADATA_URL'],
            entity_id=app.config.get('SAML_IDP_ENTITY_ID', None))
    if new_idp_data is not None:
        idp_data = new_idp_data
        idp_timestamp = datetime.now()
        print("SAML: IDP Metadata successfully retrieved from: " + app.config['SAML_METADATA_URL'])
    else:
        print("SAML: IDP Metadata could not be retrieved")
if 'TIMEOUT' in app.config.keys():
    TIMEOUT = app.config['TIMEOUT']
else:
    TIMEOUT = 10
def auth_from_url(url):
    auth = None
    parsed_url = urlparse(url).netloc
    if '@' in parsed_url:
        # Credentials are embedded in the URL as user:password@host;
        # split on the first ':' only, so passwords may contain colons.
        auth = parsed_url.split('@')[0].split(':', 1)
        auth = requests.auth.HTTPBasicAuth(auth[0], auth[1])
    return auth
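# e.g. auth_from_url('http://admin:secret@pdns:8081/') -> HTTPBasicAuth('admin', 'secret')
# (hypothetical values, for illustration only)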
def fetch_remote(remote_url, method='GET', data=None, accept=None,
                 params=None, timeout=None, headers=None):
    if data is not None and type(data) != str:
        data = json.dumps(data)
    if timeout is None:
        timeout = TIMEOUT
    verify = False  # TLS certificate verification is disabled for the remote API
    our_headers = {
        'user-agent': 'powerdnsadmin/0',
        'pragma': 'no-cache',
        'cache-control': 'no-cache'
    }
    if accept is not None:
        our_headers['accept'] = accept
    if headers is not None:
        our_headers.update(headers)
    r = requests.request(
        method,
        remote_url,
        headers=our_headers,
        verify=verify,
        auth=auth_from_url(remote_url),
        timeout=timeout,
        data=data,
        params=params
    )
    try:
        # 400 and 422 are passed through so callers can inspect API
        # validation errors in the response body.
        if r.status_code not in (200, 201, 204, 400, 422):
            r.raise_for_status()
    except Exception:
        msg = "Returned status {0} and content {1}"
        logging.error(msg.format(r.status_code, r.content))
        raise RuntimeError('Error while fetching {0}'.format(remote_url))
    return r
def fetch_json(remote_url, method='GET', data=None, params=None, headers=None):
    r = fetch_remote(remote_url, method=method, data=data, params=params,
                     headers=headers, accept='application/json; q=1')
    if method == "DELETE":
        return True
    if r.status_code == 204:
        return {}
    try:
        assert 'json' in r.headers['content-type']
    except Exception as e:
        raise RuntimeError('Error while fetching {0}'.format(remote_url)) from e
    # Don't use r.json here: it reads from r.text, which triggers content
    # encoding auto-detection in almost all cases, and that is extremely slow.
    data = None
    try:
        data = json.loads(r.content.decode('utf-8'))
    except Exception as e:
        raise RuntimeError('Error while loading JSON data from {0}'.format(remote_url)) from e
    return data
def display_record_name(data):
    record_name, domain_name = data
    if record_name == domain_name:
        return '@'
    else:
        # Strip the trailing domain name; escape it so its dots match literally.
        return re.sub(r'\.{}$'.format(re.escape(domain_name)), '', record_name)
def display_master_name(data):
    """
    input data: "[u'127.0.0.1', u'8.8.8.8']"
    """
    matches = re.findall(r'\'(.+?)\'', data)
    return ", ".join(matches)
def display_time(amount, units='s', remove_seconds=True):
    """
    Convert timestamp to normal time format
    """
    amount = int(amount)
    INTERVALS = [(lambda mlsec: divmod(mlsec, 1000), 'ms'),
                 (lambda seconds: divmod(seconds, 60), 's'),
                 (lambda minutes: divmod(minutes, 60), 'm'),
                 (lambda hours: divmod(hours, 24), 'h'),
                 (lambda days: divmod(days, 7), 'D'),
                 (lambda weeks: divmod(weeks, 4), 'W'),
                 (lambda years: divmod(