Docker multi-stage build not copying between stages

I'm trying to create a multi-stage build where the first stage does a yarn install for the theme and the second stage sets up the PHP environment for Drupal.



When I look at the build output it looks like yarn install is being run, but the COPY command near the bottom doesn't copy the result across to the PHP image. If I understand correctly, once this works the node_modules directory should also be created on my local machine?



docker-compose.yml:



version: '3.7'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./:/var/www/html:cached
    env_file:
      - ./local-development.env
    ports:
      - "8888:80"
  db:
    image: mysql:5.7
    env_file:
      - ./local-development.env
    ports:
      - "3306:3306"


Dockerfile:



FROM node:latest as yarn-install
WORKDIR /app
COPY ./web/themes/material_admin_mine ./
RUN yarn install --verbose --force;

# from https://www.drupal.org/docs/8/system-requirements/drupal-8-php-requirements
FROM php:7.2-apache

# Install & set up Xdebug
RUN yes | pecl install xdebug \
    && echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.remote_enable=1" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo 'xdebug.remote_connect_back=0' >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo 'xdebug.remote_host=docker.for.mac.localhost' >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo 'xdebug.remote_port=9000' >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo 'xdebug.remote_handler=dbgp' >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo 'xdebug.remote_mode=req' >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo 'xdebug.remote_autostart=1' >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo 'xdebug.idekey=PHPSTORM' >> /usr/local/etc/php/conf.d/xdebug.ini

# Install git & mysql-client for running Drush
RUN apt update; \
    apt install -y \
        git \
        mysql-client

# Install the PHP extensions we need
RUN set -ex; \
    \
    if command -v a2enmod; then \
        a2enmod rewrite; \
    fi; \
    \
    savedAptMark="$(apt-mark showmanual)"; \
    \
    apt-get update; \
    apt-get install -y --no-install-recommends \
        libjpeg-dev \
        libpng-dev \
        libpq-dev \
        unzip \
        git \
    ; \
    \
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer; \
    \
    docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr; \
    docker-php-ext-install -j "$(nproc)" \
        gd \
        opcache \
        pdo_mysql \
        pdo_pgsql \
        zip \
    ; \
    \
# reset apt-mark's "manual" list so that "purge --auto-remove" will remove all build dependencies
    apt-mark auto '.*' > /dev/null; \
    apt-mark manual $savedAptMark; \
    ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so \
        | awk '/=>/ { print $3 }' \
        | sort -u \
        | xargs -r dpkg-query -S \
        | cut -d: -f1 \
        | sort -u \
        | xargs -rt apt-mark manual; \
    \
    apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
    rm -rf /var/lib/apt/lists/*

# Set recommended PHP.ini settings
# see https://secure.php.net/manual/en/opcache.installation.php
RUN { \
        echo 'opcache.memory_consumption=128'; \
        echo 'opcache.interned_strings_buffer=8'; \
        echo 'opcache.max_accelerated_files=4000'; \
        echo 'opcache.revalidate_freq=60'; \
        echo 'opcache.fast_shutdown=1'; \
        echo 'opcache.enable_cli=1'; \
    } > /usr/local/etc/php/conf.d/opcache-recommended.ini

# Various packages required to run Gulp in the theme directory
# gnupg is required for nodejs
RUN apt update; \
    apt install gnupg -y; \
    apt install gnupg1 -y; \
    apt install gnupg2 -y; \
    cd ~; \
    curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh; \
    bash nodesource_setup.sh; \
    apt install nodejs -y; \
    npm install gulp-cli -g -y; \
    curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -; \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list; \
    apt update && apt install yarn -y

WORKDIR /var/www/html
COPY --from=0 /app ./web/themes/material_admin_mine
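
Since the first stage is declared with "as yarn-install", the final COPY can also reference that stage by name rather than by index; a minimal equivalent sketch of those last two lines:

WORKDIR /var/www/html
COPY --from=yarn-install /app ./web/themes/material_admin_mine

Referencing the stage by name keeps the COPY pointing at the right stage even if more stages are added in front of the PHP image later.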









docker docker-compose dockerfile docker-multi-stage-build

asked Jan 2 at 5:45
Neil Nand

  • COPY copies files from stage 0 to stage 1 (the PHP image), not to your host machine.

    – Siyu
    Jan 2 at 9:29











  • @Siyu Thanks, but since there's a volume set up in the docker-compose, shouldn't it sync those copied files to my local machine? Either way, when I bash into the container it doesn't appear to have copied them there either.

    – Neil Nand
    Jan 2 at 10:55

1 Answer
When your Dockerfile ends with:



WORKDIR /var/www/html
COPY --from=0 /app ./web/themes/material_admin_mine


That should in fact copy the data from the first build stage to the final image. But then when you launch the container with



volumes:
- ./:/var/www/html:cached


everything in the /var/www/html directory tree, including the result of that final COPY step, is hidden and replaced with what's in the current directory on the host. If you think of this like a copy, it's a one-way copy into the container: later changes get copied back out to the host, but nothing synchronizes what's in the image with what was already in the host directory at startup time.



A Dockerfile intrinsically can't affect host filesystem content. In your case it sounds like the host content is secondary to your application proper. Given what's going into the first stage, I'd just run the yarn install step on the host and be done with it (you probably already have Node and Yarn available anyway). Otherwise you'd need a more selective volumes: section that carefully avoids overwriting that one directory; you might be able to mount something like ./web/src:/var/www/html/web/src to include only your application code and avoid hiding the .../web/themes tree.
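
For concreteness, a hedged sketch of what a more selective volumes: section could look like (the ./web/src path is only the example used above and may not match the real layout). A different approach, not mentioned above but common with Node-based builds, is to keep the broad bind mount and layer an anonymous volume over just the theme's node_modules, which Docker populates from the image when the container first starts:

services:
  web:
    volumes:
      # Option 1: mount only the code you actively edit (example path)
      - ./web/src:/var/www/html/web/src
      # Option 2: keep the project-wide mount but mask node_modules with an
      # anonymous volume so the directory built into the image stays visible
      # - ./:/var/www/html:cached
      # - /var/www/html/web/themes/material_admin_mine/node_modules

Either way, the node_modules content then lives inside the container (or a Docker volume) rather than on the host.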






answered Jan 2 at 11:00
David Maze

  • Thanks for your reply, that explains what's happening and why my approach doesn't do what I'm trying to achieve. My idea was to run all setup steps in the container build so a developer just builds the container and everything they need is set up for them automatically, but it looks like that's not an easy, or potentially even correct, approach to take.

    – Neil Nand
    Jan 3 at 11:00










