#metabase #superset #nginx
ShinyProxy
- an OIDC information expression can be used to share a volume between users within the same team
- keycloak integration
- sharing containers among users
- custom html
- can set the container user via --docker-user
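As a sketch of the OIDC-expression idea above (the claim name, paths, and image are assumptions, not a tested config):

```yaml
proxy:
  specs:
    - id: metabase
      container-image: metabase/metabase
      # hypothetical: resolve the team name from the OIDC token via SpEL
      container-volumes: [ "/teams/#{oidcUser.idToken.claims['team']}:/data" ]
```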
→ Superset
Overall, Superset does not support serving under a base URL (path prefix), so it's a pain to integrate with ShinyProxy.
→ Metabase
Metabase has CSP protection, so it does not work in an iframe. The enterprise version can bypass the limitation, likely as a feature to embed dashboards in other applications.
still, some buttons are missing (admin/settings), so there is some work to figure out why
there is no anonymous access, so the user would have to log in twice. Too bad.
→ disable CSP
- It is very easy to build a custom Metabase with that protection removed
- Leverage nginx reverse proxy to hide the CSP headers
The second option looks better:
- no Metabase build or patch to maintain
- a general solution, reusable to integrate other tools
- reuse the nginx to disable the login form, see below
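A minimal sketch of the header-stripping proxy (port and upstream are hypothetical; the exact headers to hide may vary by Metabase version):

```nginx
server {
    listen 8080;

    location / {
        # strip the frame-busting headers before the response reaches the browser
        proxy_hide_header Content-Security-Policy;
        proxy_hide_header X-Frame-Options;
        proxy_pass http://127.0.0.1:3000;
    }
}
```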
→ skip login
One idea is to call the Metabase login API and create a cookie, transferred by nginx.
- the H2 database would be pre-initialized with an admin/admin user
- the entrypoint would copy the db into the user-mounted folder, if it does not exist there yet, before starting Metabase
- that user/pass would be used for the API call by the script
OpenResty is an nginx distribution which includes the LuaJIT interpreter for Lua scripts
```dockerfile
FROM openresty/openresty:buster-fat
RUN opm install ledgetech/lua-resty-http thapakazi/lua-resty-cookie
COPY default.conf /etc/nginx/conf.d/
COPY *.lua /usr/local/openresty/nginx/
COPY nginx.conf /usr/local/openresty/nginx/conf/nginx.conf
```
```nginx
server {
    listen 8080;
    server_name your.metabase.domain;

    location / {
        access_by_lua_file gen_token.lua;
        proxy_pass http://127.0.0.1:3000;
    }
}
```
```lua
-- gen_token.lua: if the browser has no Metabase session cookie,
-- log in via the Metabase API and set the session cookie ourselves.
local cjson = require("cjson")
local httpc = require("resty.http").new()
local ck = require("resty.cookie")

local cookie, err = ck:new()
if not cookie then
    ngx.log(ngx.ERR, err)
    return
end

local field = cookie:get("metabase.SESSION")
if not field then
    -- authenticate against the Metabase API with the pre-provisioned user
    local res, err = httpc:request_uri("http://127.0.0.1:3000/api/session", {
        method = "POST",
        body = cjson.encode({
            username = os.getenv("METABASE_USERNAME"),
            password = os.getenv("METABASE_PASSWORD"),
        }),
        headers = {
            ["Content-Type"] = "application/json",
        },
    })
    if not res then
        ngx.log(ngx.ERR, "request failed: ", err)
        return
    end

    local data = cjson.decode(res.body)

    -- send the session cookie back to the browser...
    local ok, err = cookie:set({
        key = "metabase.SESSION",
        value = data["id"],
        path = "/",
        domain = ngx.var.host,
        httponly = true,
        -- max_age = 1209600,
        samesite = "Lax",
    })
    if not ok then
        ngx.log(ngx.ERR, err)
        return
    end

    -- ...and attach it to the current proxied request as well,
    -- so the very first request is already authenticated
    ngx.req.set_header("Cookie", "metabase.SESSION=" .. data["id"])
end
```
→ enable concurrent connections
Sounds like we could run multiple instances of Metabase sharing the same db, for example a db in the team folder, so that team members share their dashboards.
- H2 likely supports concurrent connections with `h2:file:./data/testdb;AUTO_SERVER=TRUE`
- previously, Metabase was running with AUTO_SERVER
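If Metabase passes the `MB_DB_FILE` options through to H2 (an assumption to verify), sharing a team db could be sketched as:

```shell
export MB_DB_TYPE=h2
# hypothetical shared path; AUTO_SERVER lets several JVMs open the same file
export MB_DB_FILE="/team/metabase.db;AUTO_SERVER=TRUE"
```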
→ resources management
→ volume access
Goal:
- In the user directory, files and folders are rw across applications
- In the team directory, files and folders are rw across applications and members of the team
- When a volume is mounted in a container, if the source folder does not yet exist, it is created by the root user and overrides the folder it is bound over in the container. However, if the folder already exists, at least in the Docker image, then it keeps its ownership from that source, hence the explicit chown within the Dockerfile -> this is no longer true with recent Docker versions
- the uid and primary gid of the container user are used to create files and folders, unless setuid/setgid bits are set, in which case the owner/group is kept
- we could use the sticky bit on others to let the container user change the folder user/group at init time (same behavior as the /usr/bin/passwd command)
- ideally all uid/gid would be the same across containers, but it is not possible (RStudio might use 101 while Jupyter uses 102, and so on); while we can set the user from outside, the app might not work with it
- a volume allows setting the uid/gid, but only with tmpfs, CIFS or NFS; not with bind mounts
- it is possible to create and configure volumes with a one-liner
- acl on the host won't apply within the container
- if we were able to pre-create the folders on the host, the user could write. Still, the apps wouldn't be able to cross-edit
- this approach works, however it is not supported by shinyproxy.
```dockerfile
FROM ubuntu:22.04
RUN mkdir -p '/foo' ; chown '1001':'1001' '/foo'
```
then:
```shell
docker build -t nico:latest .
docker run -it --rm --user=1001:1001 --mount='source=volumeName,target=/foo,readonly=false' nico:latest ls -alrth / | grep foo
# drwxr-xr-x 2 1001 1001 4.0K Sep 10 22:26 foo
```
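For the "rw across members of the team" goal, one classic building block is the setgid bit on the team folder, so new entries inherit the folder's group; a minimal sketch (paths are illustrative):

```python
import os
import stat
import tempfile

# Sketch: mark a team folder setgid so files created inside inherit its
# group, keeping them group-accessible across apps and team members.
team_dir = tempfile.mkdtemp(prefix="team-")
os.chmod(team_dir, 0o2775)  # rwxrwsr-x: group rw + setgid bit

mode = os.stat(team_dir).st_mode
print(bool(mode & stat.S_ISGID))  # the setgid bit is set
```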
also, we could try to use rootless Docker, run by the uid 1000 user (which is used by Jupyter and RStudio)
e.g. to configure an alternative Docker URL:
proxy.docker.url: URL and port on which to connect to the docker daemon, if not specified ShinyProxy tries to connect using the Unix socket of the Docker daemon
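In `application.yml` that would look like (the endpoint is illustrative, e.g. a rootless daemon exposed over TCP):

```yaml
proxy:
  docker:
    url: http://localhost:2375
```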