Hi there, I've started on my first Zammad PR with a view to adding S3 provider support to the application, for cloud/Docker/Kubernetes deployments where persistence is not on the filesystem and you don't want to store binary objects in the database.
It's been a while since I last did any Rails, and I'm looking for a pointer on how to get the settings coffee controller to take the fields access_key_id, access_key_secret and endpoint_url.
It's also made slightly more complicated by the fact that the DB and FS providers don't currently take any input, so I don't have an example of how to implement this (i.e. does it need a new model?) or of how to change the UI in an acceptable manner to take the new fields.
Another, perhaps better, option would be to rely on ENV vars rather than DB configuration for the S3 provider, but again I'd like some direction from the devs in the community.
I realise this is more of a general Rails question, but as this is a community project I would prefer direction from the owners.
Hey @rosscdh - what a first post in the community. I love it! Welcome to the Zammad community.
I checked the current implementation: the Setting storage_provider JS controller inherits from App.SettingsAreaItem and renders a custom template (which you've already updated in the PR). A Setting item is handled as a simple one-value entity without any relation to other Settings. This helps us add simple Settings without much hassle.
In your context you want a richer Setting with logic and dependencies. So instead of using App.SettingsAreaItem as the controller, you want to build a custom one, e.g. like App.SettingsAreaLogo. This way you get full control over rendering your setting block. App.SettingsAreaLogo is actually a pretty good example here, with rich functionality.
We have no perfect solution for storing access variables yet and have relied on dedicated Setting records with custom rendering in the past.
So what you could do is migrate the Setting 'element' from inheriting from App.SettingsAreaItem to a custom version of App.SettingsAreaLogo, e.g. App.SettingsAreaStorage. This would extend the current UI and render a nice interface where the admin can choose between the storage providers the same way as right now. There would be no visual difference from the current input, except that S3 would be available. But when the admin selects S3, the additionally required attribute input fields are shown. Once the admin clicks submit, the new App.SettingsAreaStorage controller kicks in, checks which provider is selected and also checks for additional attributes if needed. These custom attributes/additional access variables could then be stored via Setting.set('key', 'value').
We could let the S3 storage backend fall back on ENVs if the access variable Settings are blank.
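A minimal sketch of that idea (the setting and ENV names below are placeholders I just made up, not final naming):

```ruby
# Minimal sketch only - setting/ENV names are placeholders.

# Persist the access variables the admin entered, one Setting per value:
Setting.set('storage_s3_access_key_id', 'example-access-key-id')
Setting.set('storage_s3_secret_access_key', 'example-secret')
Setting.set('storage_s3_endpoint', 'https://s3.eu-central-1.amazonaws.com')

# Read them back, falling back to ENV when the Setting is blank,
# which covers 12-factor style deployments without any UI input:
def s3_access_variable(setting_name, env_name)
  Setting.get(setting_name).presence || ENV[env_name]
end

access_key_id = s3_access_variable('storage_s3_access_key_id', 'S3_ACCESS_KEY_ID')
```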
Some more background information you might be interested in:
For the frontend we rely on an outdated framework called SpineJS with some additional custom logic. You can find more information about it here: https://spine.github.io/
I'd also recommend opening a bottle of wine and spending some time looking at how other JS controllers work.
The JS template engine we use is called Eco and is outdated as well. You can find more information about it here: https://github.com/sstephenson/eco
Hope that helps and gives you a direction. Let me know if you have any further questions.
Hey there, thanks for the info and of course the welcome!
Yep, ENV vars are definitely the way to go; it's pretty much how most (all?) other cloud-oriented systems work, so perhaps we don't even need to change the form (though we should add info/checks about whether the ENV vars are set or not). There are actually security concerns about storing this kind of info in a DB rather than injecting it as ENVs (12-factor app).
No problem regarding the outdated libs, it happens; it's been difficult to keep up with JS libs over the last 5 years. I've started recommending https://svelte.dev/ to anyone who'll listen on that topic though.
I'm really impressed by Zammad, it's a fantastic system, but it needs a little work on the 'getting it cloud ready' epic, and I'm keen to assist if I can.
May I ask another question about the database and database migrations, or should that rather go in a separate topic?
Thanks, and never mind the DB migration question (I wanted to make sure there were no anti-patterns with DBs like there are with the Discourse forum); my test migration went smoothly.
I imagine, since the app has a JS frontend, we would want to upload the file with a presigned URL rather than server-side?
This simplifies many things and puts the onus on the frontend: the backend simply provides an endpoint that calculates a presigned upload URL that the client can then upload directly to, instead of sending a base64-encoded request to the server followed by an upload from the server to S3.
Does that make sense? (It also implies I'm going to have to get to know Spine a bit better, I guess.)
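To illustrate the idea, a rough sketch with the aws-sdk-s3 gem (region, credentials and bucket name are placeholders): the backend only hands out a short-lived URL and the browser PUTs the binary straight to S3.

```ruby
require 'aws-sdk-s3'

# Presigner against a hypothetical bucket/region; credentials could come
# from ENV as discussed above.
presigner = Aws::S3::Presigner.new(
  client: Aws::S3::Client.new(
    region:            'eu-central-1',
    access_key_id:     ENV['S3_ACCESS_KEY_ID'],
    secret_access_key: ENV['S3_SECRET_ACCESS_KEY']
  )
)

# Short-lived URL the JS client can PUT the file to directly.
upload_url = presigner.presigned_url(
  :put_object,
  bucket:     'zammad-attachments',
  key:        'uploads/example-file.png',
  expires_in: 900 # seconds
)
```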
I've updated the PR to cater to this, and once I've got my dev environment set up I'll write some tests.
Ideally (as I have zero experience with Spine and the implementation here) I'd like to provide the endpoint and get some help with the FE work?
Actually, I'm not that experienced with S3 and this workflow. Would you mind sharing/pointing me to a resource that matches the Zammad use case, or elaborating on what the steps are? That would be a great help to me. The goal would be to provide developers a transparent API that works the same for any storage backend, without the need to check for the currently active one.
JFI: there should be no business logic in the JS app/frontend, since that makes it harder to implement different clients (e.g. mobile/API integrations etc.).
JFI2: we planned to evaluate Rails ActiveStorage and see how/if it covers our requirements (duplicate prevention etc.) but haven't had the chance yet. AFAIK it also supports S3 and other cloud storage providers.
With a backend upload, a file goes client -> POST fat binary -> backend -> S3 (this is kind of inefficient, especially since the destination is actually S3, not the server).
Instead: client -> POST to get a presigned URL -> client uploads the binary directly to S3, skipping the overhead of having to upload the file twice (once to the server and again from the server to S3, i.e. double network traffic).
The store provider would then provide all the 'normal' interfaces, so that data can be moved from DB and File to S3, but also an endpoint to calculate the presigned URL (I've already laid the basis for this in my PR). The web client would POST to request a presigned URL and then upload directly to S3 instead of via the server backend, saving bandwidth and time.
No business logic in the frontend: the client selects a file as normal, and on change a presigned URL is requested from the backend. The client then transfers the data straight to S3 using the presigned URL and the extra fields provided by the presign endpoint.
As with the current implementation, the incoming request provides things like the filename and the guessed content type, and the key can be prefixed with a random sha as is done today.
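A hypothetical sketch of such an endpoint (controller name, route and key layout are invented for illustration, not the code from my PR): the client sends the filename and guessed content type, the backend builds a prefixed key and returns a presigned PUT URL plus the headers the client has to send to S3.

```ruby
# Hypothetical sketch only - not the actual PR code.
class PresignedUploadsController < ApplicationController
  def create
    # Random prefix, analogous to the prefixing done today.
    prefix = SecureRandom.hex(16)
    key    = "attachments/#{prefix}/#{params[:filename]}"

    url = Aws::S3::Presigner.new.presigned_url(
      :put_object,
      bucket:       ENV['S3_BUCKET'],
      key:          key,
      content_type: params[:content_type],
      expires_in:   900
    )

    render json: {
      upload_url: url,
      key:        key,
      headers:    { 'Content-Type' => params[:content_type] }
    }
  end
end
```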
ActiveStorage seems to be the current best practice for this kind of thing; it looks like a good choice and also appears to support presigning of URLs transparently.
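For reference, the standard Rails wiring for ActiveStorage on S3 looks roughly like this (generic Rails setup, nothing Zammad-specific); presigned direct uploads then come with the framework:

```ruby
# Generic Rails setup, shown only for reference.
#
#   # config/storage.yml
#   amazon:
#     service: S3
#     access_key_id: <%= ENV["S3_ACCESS_KEY_ID"] %>
#     secret_access_key: <%= ENV["S3_SECRET_ACCESS_KEY"] %>
#     region: eu-central-1
#     bucket: zammad-attachments

# config/environments/production.rb
Rails.application.configure do
  config.active_storage.service = :amazon
end
```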
Hm, while the benefits of direct upload look appealing, I see major changes required to integrate it into the generic storage backend architecture we currently have. I estimate a required rewrite of about 70-80% of the current implementation. We have had bad experiences with such 'follow up' or 'extended scope' changes, resulting in broken or never-finished implementations. Therefore I'd recommend taking the direct route first and adding the S3 integration without direct upload. Afterwards we can pick up the work on the ActiveStorage migration, or change the generic storage backend implementation to support direct upload as well.
Sorry, that might have sounded different than I meant it. English is not my mother tongue and I was a bit in a hurry.
Next try - please don't get me wrong:
70-80% is just a rough estimate based on my experience with the codebase, the implementation and past experience with such changes. Let me know what's unclear and I'll try to elaborate in more detail.
By 'extended scope' I mean that we've had bad experiences when tasks started with e.g. 'Let's add S3 support' and ended with 'we should refactor/rewrite the storage logic/layer'. However, I don't intend to judge your skill based on the experience I/we have had with other people. If you think you can make it, don't let a random stranger on the internet tell you that you can't.
I'm happy to help and will try to free up as much time as I can for giving guidance and answering questions. Again, based on past experience with other people: I have spent a lot of time on guidance and the changes were never finished. That can be frustrating. It's just that we need to think about various use cases and setups, and about future maintenance. I don't want to sound rude or picky.
Hi Thorsten, no problem at all, feel free to write to me in German.
But my written German is terrible, so apologies if I reply in English.
Yeah, sure, I can understand your concerns; working on quite a few projects myself, I know the feeling quite well.
That said, I don't quite agree it's a 70% rewrite, as it's basically the same interfaces as for the other two backends, plus a single presign route to grant upload access to the JS client.
The standard interface methods would be used only to migrate from DB/File to S3.
So really it's only one new route, some JavaScript (of which there are tonnes of working examples in the wild), and the store interfaces for move and delete, which are made quite simple by the aws-sdk-s3 gem.
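Roughly, I imagine the S3 provider sticking to the same class-method interface the existing DB and File providers expose; the sketch below assumes add/get/delete methods keyed by the sha (I'm guessing at the exact signatures), with the actual S3 calls handled by the aws-sdk-s3 gem.

```ruby
# Rough sketch - interface methods and bucket handling are assumptions,
# not the actual implementation.
class Store::Provider::S3
  def self.add(data, sha)
    bucket.object(sha).put(body: data)
    true
  end

  def self.get(sha)
    bucket.object(sha).get.body.read
  end

  def self.delete(sha)
    bucket.object(sha).delete
    true
  end

  def self.bucket
    @bucket ||= Aws::S3::Resource.new.bucket(ENV['S3_BUCKET'])
  end
end
```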
Hey @rosscdh - thanks for the offer, but since English is the primary language around here I'd rather stick to it, so that as many people as possible can have their takeaways... hopefully.
As you already pointed out, there needs to be dedicated logic to generate the presigned URLs. That collides with the specification of our backend/module/plugin abstraction layers. We have various places with those kinds of layers (Channel/Authentication/Import/etc.). The goal is to have one interface that works for all implementations/integrations/providers/plugins/etc. of the corresponding area.
As we have now noticed, the current implementation of the storage layer would work for a 'double upload' implementation, but wouldn't work for the 'direct upload' approach. That's too bad. Therefore we'd need a new layer architecture that also covers 'direct upload'. Having a layer that covers only 80% of the implementation of a backend/provider/etc. is not an option for us. It would require ifs and other conditional logic all over the place. So whenever new backends/providers/etc. were added, those conditionals would grow and grow, covering new cases all over the application. That would make that part of the application unmaintainable over time. A typical symptom of this anti-pattern is shotgun surgery. We have had enough of those in the past and want to avoid them in the future. To do this we'd need a new module/backend/provider layer.
Looking at the implementation of the current module/backend/provider layer, I see the need for a major rewrite. We'd need to change the implementation of both the frontend and the backend. That's why I think it's a 70-80% rewrite.
It may help if you imagine adding/building the S3 backend without adding if statements or dedicated checks for S3, class names or method checks.
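To make that concrete, the consuming code should be able to stay as generic as this sketch (names are illustrative); adding S3 would then mean one new provider class and zero new conditionals anywhere else:

```ruby
# Illustrative only - the point is the absence of any S3-specific branch.
def store_attachment(data, sha)
  adapter  = Setting.get('storage_provider')            # 'DB', 'File' or, later, 'S3'
  provider = "Store::Provider::#{adapter}".constantize  # resolved by name, no if/else
  provider.add(data, sha)                               # same call for every backend
end
```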