Accessing mounted volumes from Docker for composer/npm installs at build time?
I'm looking for a better way to install Composer or npm packages inside host volumes mounted via docker-compose.
In my docker-compose.yml, I have:
volumes:
- ./app:/var/www/app
...
And in my Dockerfile, I want to use this volume to install the dependencies:
VOLUME ["/var/www/app"]
RUN composer install -d /var/www/app
But as I understand it, the volumes mounted in docker-compose are not yet available when the image is being built from the Dockerfile.
So my next attempt was to do it when the container starts:
CMD bash -c "composer install -d /var/www/app && /usr/sbin/apache2ctl -DFOREGROUND"
That worked, at least, but it ran composer install every time the container started, which is redundant.
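(For completeness, the repeated install could be skipped with a small guard; this is only a sketch of a hypothetical entrypoint wrapper, assuming an existing vendor directory is a good enough signal that the dependencies are already installed:)

#!/bin/bash
# docker-entrypoint.sh -- hypothetical wrapper, copied into the image and used as the CMD
set -e
# only run composer when the mounted code has no vendor directory yet
if [ ! -d /var/www/app/vendor ]; then
    composer install -d /var/www/app
fi
exec /usr/sbin/apache2ctl -DFOREGROUND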
So my current idea is to use a dedicated composer build stage: copy composer.json into it, run the install there, then copy the finished vendor directory from the composer stage into the main image and symlink it where it is needed. Like this:
# first stage: resolve PHP dependencies in the official composer image
FROM composer as composer
COPY ./app/composer.json /app
COPY ./app/composer.lock /app
RUN composer install --ignore-platform-reqs --no-scripts

# second stage: the actual application image
FROM library/ubuntu:jessie
# ... do other stuff with the main image ...
COPY --from=composer /app/vendor /var/www/composer/vendor
# -sfn replaces a symlink left over from a previous run instead of failing and aborting apache
CMD bash -c "ln -sfn /var/www/composer/vendor /var/www/app/vendor && /usr/sbin/apache2ctl -DFOREGROUND"
But it still feels like a workaround for such an ordinary problem. Is there a better way to go about this, or any known good practice?
docker docker-compose composer-php dockerfile
asked Jan 1 at 20:05 by Kana, last edited Jan 1 at 20:14
Personally I prefer to just execute the composer command either directly on the host or through an exec or run via docker, and only do that when it is actually required.
– Jite, Jan 1 at 20:11
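(In practice that presumably looks something like the following, using the official composer image; the host path and the compose service name are assumptions:)

docker run --rm -v "$PWD/app":/app composer install --ignore-platform-reqs
# or against an already-running compose service:
docker-compose exec app composer install -d /var/www/app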
I also did that, but then you have to pollute your host system with all the toolkits and whatnot. If you develop against different versions, it's going to be utter chaos. And if you exec it, you have to do it every single time you rebuild a container, which would also limit automation possibilities for testing... :/
– Kana, Jan 1 at 20:17
1 Answer
What I consider best practice, and what I do professionally, is to not use volumes at all for this case. My Dockerfile COPYs in the application code at build time. I have a working host development setup (the only real host dependency is Node itself; everything else is in the node_modules directory), and so if I have an issue I can reproduce, debug, and write a test for it in my local environment. Only when that works do I go back to Docker.
FROM ???
WORKDIR /var/www/app
COPY app/composer.json app/composer.lock ./
RUN composer install --ignore-platform-reqs --no-scripts
COPY app/ ./
...
CMD ["apache2ctl", "-DFOREGROUND"]
Otherwise, there are a couple of things about Docker volumes to remember here:

- Everything in the Dockerfile happens before any volumes or environment variables in the docker-compose.yml file are even considered. If your goal is to populate a volume, you can't do it in the Dockerfile (and this is just awkward in Docker in general; use native host tools instead).
- If you mount a volume into a container directory, it totally hides what's there already. I see a lot of questions with Dockerfiles that do work only inside a container-local /app directory, then bind-mount the local source tree over that; that basically makes the Dockerfile a no-op. (See the sketch after this list for a common compromise.)
- If you have a VOLUME directive in your Dockerfile, you can't make any changes to that directory in the image any more. (In your question, running composer after the VOLUME directive will silently have no effect.)
- You can mount a volume into any directory in a container regardless of whether or not it was declared as a VOLUME. I'd recommend never declaring VOLUMEs in Dockerfiles, and especially not for directories that contain code (you want these to be updated with new image code when a container gets deleted and recreated).
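If the bind mount is still wanted for live code edits during development, a common compromise (a sketch only, reusing the paths from the question and assuming dependencies are installed into the image at build time as in the Dockerfile above) is to layer an anonymous volume over the dependency directory, so the image-built vendor content shadows whatever is, or isn't, on the host:

version: "3"
services:
  app:
    build: .
    volumes:
      - ./app:/var/www/app      # live-edit the application code from the host
      - /var/www/app/vendor     # anonymous volume: keeps the vendor dir built into the image

The caveat is that the anonymous volume keeps its contents across container recreations, so after changing composer.json it has to be refreshed (for example with docker-compose up --renew-anon-volumes, or docker-compose down -v).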
answered Jan 1 at 20:23 by David Maze
Thank you for your detailed answer. So you're developing on your host and, at some point, you build your container with that fixed state. My "goal" is to move the tooling required to run my applications off my host and into the containers, while still being able to make extensive code changes for ongoing development.
– Kana, Jan 1 at 20:43