MoPaaS WSGI Configuration

The CloudFoundry instances run in KVM virtual machines, so there must be some IaaS underneath (confirmed to be running on UCloud's IaaS):

model name : QEMU Virtual CPU version (cpu64-rhel6, Red Hat Enterprise Linux build)

The KVM guest runs Ubuntu 10.04.4 LTS, the same as JD Cloud, although jcloud's CloudFoundry runs on physical machines.

~/gunicorn.config

import os

bind = "0.0.0.0:%s" % os.environ['VCAP_APP_PORT']
loglevel = "debug"
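The config binds gunicorn to whatever port CloudFoundry injects into the droplet through VCAP_APP_PORT. For context, a minimal wsgi:application callable that an invocation like `gunicorn -c ../gunicorn.config wsgi:application` would serve could look like this (a sketch; the response body is illustrative, not taken from the platform):

```python
# wsgi.py -- minimal WSGI callable for gunicorn to load as wsgi:application
def application(environ, start_response):
    body = b"Hello from MoPaaS\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```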

Filesystem information

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/vda1             20640380  15434148   4157760  79% /
none                   2024488       164   2024324   1% /dev
none                   2028936         0   2028936   0% /dev/shm
none                   2028936        60   2028876   1% /var/run
none                   2028936         0   2028936   0% /var/lock
none                   2028936         0   2028936   0% /lib/init/rw
10.4.8.152:/var/vcap/store/fss_backend1
                      41285120   1086976  38100992   3% /var/vcap/store/fss_backend1

/dev/vda1 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
none on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
none on /dev type devtmpfs (rw,mode=0755)
none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
none on /dev/shm type tmpfs (rw,nosuid,nodev)
none on /var/run type tmpfs (rw,nosuid,mode=0755)
none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
10.4.8.152:/var/vcap/store/fss_backend1 on /var/vcap/store/fss_backend1 type nfs (rw,addr=10.4.8.152)

Process information

Most of them are PHP:

23010      665  0.0  0.0  35608   612 ?        S    Sep17   0:00 su -s /bin/bash vcap-user-10
23010      668  0.0  0.0  10756   556 ?        S    Sep17   0:00 bash
23010      672  0.0  0.0  10768   564 ?        S    Sep17   0:00 /bin/bash ./startup -p 51480
23010      677  0.0  0.0  10760   560 ?        S    Sep17   0:00 bash ./start.sh
23010      678  0.0  0.0 239352  3028 ?        S    Sep17   3:37 /usr/sbin/apache2 -d /var/vcap.local/dea/apps/asdasdasd-0-e8f178adbb112b2205a32af74e93bb6d/apache -f /var/vcap.local/dea/apps/asdasdasd-0-e8f178adbb112b2205a32af74e93bb6d/apache/apache2.conf -D FOREGROUND

Some use Java:

23709     9430  0.0  0.0  35608   772 ?        SN   Sep26   0:00 su -s /bin/bash vcap-user-709
23709     9433  0.0  0.0  10752   512 ?        SN   Sep26   0:00 bash
23709     9435  0.0  0.0  10756   524 ?        SN   Sep26   0:00 /bin/bash ./startup -p 36608
23709     9439  0.0  3.6 1025780 147048 ?      SNl  Sep26  76:31 /usr/lib/jvm/java-6-openjdk/jre/bin/java -Djava.util.logging.config.file=/var/vcap.local/dea/apps/socialtest-0-3f8ca022f3042d54836a23e2dbb6d066/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Xms512m -Xmx512m -Djava.io.tmpdir=/var/vcap.local/dea/apps/socialtest-0-3f8ca022f3042d54836a23e2dbb6d066/tmp -Djava.endorsed.dirs=/var/vcap.local/dea/apps/socialtest-0-3f8ca022f3042d54836a23e2dbb6d066/tomcat/endorsed -classpath /var/vcap.local/dea/apps/socialtest-0-3f8ca022f3042d54836a23e2dbb6d066/tomcat/bin/bootstrap.jar -Dcatalina.base=/var/vcap.local/dea/apps/socialtest-0-3f8ca022f3042d54836a23e2dbb6d066/tomcat -Dcatalina.home=/var/vcap.local/dea/apps/socialtest-0-3f8ca022f3042d54836a23e2dbb6d066/tomcat -Djava.io.tmpdir=/var/vcap.local/dea/apps/socialtest-0-3f8ca022f3042d54836a23e2dbb6d066/tomcat/temp org.apache.catalina.startup.Bootstrap start
root      9456  1.0  1.9 1477992 79532 ?       Sl   Oct22 653:11 java -cp /root/mopaasmgr//lib/jsch-0.1.26.jar:/root/mopaasmgr//lib/commons-beanutils-1.8.0.jar:/root/mopaasmgr//lib/commons-codec-1.4.jar:/root/mopaasmgr//lib/commons-collections-3.2.1.jar:/root/mopaasmgr//lib/commons-lang-2.4.jar:/root/mopaasmgr//lib/commons-logging-1.1.1.jar:/root/mopaasmgr//lib/ezmorph-1.0.6.jar:/root/mopaasmgr//lib/httpclient-4.1.2.jar:/root/mopaasmgr//lib/httpcore-4.1.2.jar:/root/mopaasmgr//lib/json-lib-2.3-jdk15.jar:/root/mopaasmgr//lib/log4j-1.2.17.jar:/root/mopaasmgr//lib/mgr-client-0.0.1.jar me.anchora.mopaasmgr.client.main.SendData host_desc=dea03 is_app_log=true /dev/null
...
23006    14741  0.0  0.0  35608  1156 ?        SN   Nov01   0:00 su -s /bin/bash vcap-user-6
23006    14743  0.0  0.0  10760  1328 ?        SN   Nov01   0:00 bash
23006    14745  0.0  0.0  10764  1364 ?        SN   Nov01   0:00 /bin/bash ./startup -p 46800
23006    14749  0.1 13.1 1566000 533408 ?      SNl  Nov01  61:44 /usr/lib/jvm/java-6-openjdk/jre/bin/java -Djava.util.logging.config.file=/var/vcap.local/dea/apps/mapreduce-0-8cdbde735e1138b41a2633659a18df05/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Xms1024m -Xmx1024m -Djava.io.tmpdir=/var/vcap.local/dea/apps/mapreduce-0-8cdbde735e1138b41a2633659a18df05/tmp -Djava.endorsed.dirs=/var/vcap.local/dea/apps/mapreduce-0-8cdbde735e1138b41a2633659a18df05/tomcat/endorsed -classpath /var/vcap.local/dea/apps/mapreduce-0-8cdbde735e1138b41a2633659a18df05/tomcat/bin/bootstrap.jar -Dcatalina.base=/var/vcap.local/dea/apps/mapreduce-0-8cdbde735e1138b41a2633659a18df05/tomcat -Dcatalina.home=/var/vcap.local/dea/apps/mapreduce-0-8cdbde735e1138b41a2633659a18df05/tomcat -Djava.io.tmpdir=/var/vcap.local/dea/apps/mapreduce-0-8cdbde735e1138b41a2633659a18df05/tomcat/temp org.apache.catalina.startup.Bootstrap start
.....
root     24788  1.3  1.5 312688 61940 ?        Sl   Sep03 1758:44 ruby /root/cloudfoundry/vcap/dea/bin/dea -c /root/cloudfoundry/.deployments/devbox/config/dea.yml
26170    24962  0.0  0.0  35608   612 ?        S    Sep02   0:00 su -s /bin/bash vcap-user-3170
26170    24966  0.0  0.0  10748   492 ?        S    Sep02   0:00 bash
26170    24968  0.0  0.0  10744   496 ?        S    Sep02   0:00 /bin/bash ./startup -p 63178
26170    24969  0.0  0.0 629876   696 ?        S    Sep02   0:11 /root/cloudfoundry/.deployments/devbox/deploy/nodes/node-0.4.12/bin/node autoconfig.js -p 63178

One virtual machine (or physical machine) runs one or more DEAs. The DEA (Droplet Execution Agent) is the environment applications run in: a single DEA can start multiple apps (also called droplets), so everyone's apps are started and managed by a DEA.
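The per-droplet launch pattern visible in the process listings (su to a throwaway vcap-user-N account, then run the app's ./startup script with an assigned port) can be sketched as the command a DEA would build. The function name is mine, purely illustrative:

```python
def droplet_command(user, port):
    """Build the launch command seen in `ps`:
    su -s /bin/bash <vcap-user> -c "./startup -p <port>"."""
    return ["su", "-s", "/bin/bash", user, "-c", "./startup -p %d" % port]
```

Running each app as its own unprivileged user is what lets this early DEA apply at least per-user limits (the `enforce_ulimit: true` setting in dea.yml).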

It looks like MoPaaS uses the early DEA. That initial DEA barely constrains system resources; Warden was later built on top of cgroups to do so.

JD uses Warden.

Network information

ifconfig

eth0      Link encap:Ethernet  HWaddr 52:54:00:7d:6c:e5
          inet addr:10.4.6.126  Bcast:10.4.255.255  Mask:255.255.0.0
          inet6 addr: fe80::5054:ff:fe7d:6ce5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:775067148 errors:0 dropped:0 overruns:0 frame:0
          TX packets:783086411 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:228316621787 (228.3 GB)  TX bytes:111499404544 (111.4 GB)

After the app is stopped and started again, it may be scheduled onto a different machine:

eth0      Link encap:Ethernet  HWaddr 52:54:00:de:fa:83
          inet addr:10.4.2.227  Bcast:10.4.255.255  Mask:255.255.0.0
          inet6 addr: fe80::5054:ff:fede:fa83/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:474094725 errors:0 dropped:0 overruns:0 frame:0
          TX packets:479293517 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000

Configuration file

cat /root/cloudfoundry/.deployments/devbox/config/dea.yml

---
# Base directory where all applications are staged and hosted
base_dir: /var/vcap.local/dea
# Local_route is the IP address of a well known server on your network, it
# is used to choose the right ip address (think of hosts that have multiple nics
# and IP addresses assigned to them) of the host running the DEA. Default
# value of nil, should work in most cases.
local_route:
# Port for accessing the files of running applications
filer_port: 12345
# NATS message bus URI
mbus: nats://nats:nats@10.4.2.169:4222/
intervals:
  # Time interval between heartbeats sent to the Health Manager
  heartbeat: 10
logging:
  level: debug
# Maximum memory allocated to this DEA. In a multi tenant setup, this
# memory is divided amongst all applications managed by this DEA.
max_memory: 6144
# Secure environment for running applications in a multi tenant setup.
secure: true
# Allow more than one application to run per DEA
multi_tenant: true
# Provide ulimit based resource isolation in a multi tenant setup.
enforce_ulimit: true
pid: /var/vcap/sys/run/dea.pid
# Force droplets to be downloaded over http even when
# there is a shared directory containing the droplet.
force_http_sharing: true
# This is where the execution agent determines its available runtimes.
# version flags are assumed to be '-v' unless noted below.
runtimes:
  - ruby18
  - ruby19
  - ruby193
  - node
  - node06
  - node08
  - java
  - java7
  - erlangR14B01
  - php
  - python2

Second startup

26983    21460  0.0  0.0  35608  1280 ?        S    22:58   0:00 su -s /bin/bash vcap-user-3983
26983    21463  0.0  0.0  10748  1476 ?        S    22:58   0:00 bash
26983    21465  0.0  0.0  10748  1484 ?        S    22:58   0:00 /bin/bash ./startup -p 24375
26983    21494  0.0  0.2  41212 10544 ?        S    22:58   0:00 /usr/bin/python ../python/bin/gunicorn -c ../gunicorn.config wsgi:application
26983    27489  0.0  0.0  19372  2100 ?        S    23:32   0:00 /bin/bash -i
26983    27550  0.0  0.2  45460 11276 ?        S    23:33   0:00 /usr/bin/python ../python/bin/gunicorn -c ../gunicorn.config wsgi:application
26983    28022  0.0  0.0  15268  1196 ?        R    23:34   0:00 ps aux
26983    28023  0.0  0.0   7588   812 ?        S    23:34   0:00 fgrep 26983

In the earlier design, when the Router received a request it picked a droplet at random to handle it. That made sessions unusable, because consecutive HTTP requests from the same user were dispatched to different application instances. The newer design adds session support: when the Router sees cookie information in a request, it hides a droplet's host and port inside the cookie. When the next request comes in, the Router parses the cookie to recover the previous application instance and tries to forward the request to that same droplet.
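The sticky-routing decision described above boils down to a cookie lookup with a random fallback. A minimal sketch (the cookie key, backend list, and function names are mine for illustration; the real Router is a separate Ruby/nginx component):

```python
import random

# Backends registered for one app; addresses are illustrative.
DROPLETS = [("10.4.6.126", 51480), ("10.4.2.227", 36608)]

STICKY_COOKIE = "sticky"  # cookie key chosen here for illustration

def pick_backend(cookies):
    """Reuse the droplet hidden in the cookie when it is still
    registered; otherwise fall back to a random droplet."""
    value = cookies.get(STICKY_COOKIE)
    if value:
        host, _, port = value.partition(":")
        backend = (host, int(port))
        if backend in DROPLETS:
            return backend
    return random.choice(DROPLETS)

def sticky_value(backend):
    """What the Router would stash in the response cookie."""
    return "%s:%d" % backend
```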

In the original architecture, after a user uploaded code, the Cloud Controller combined it with CloudFoundry's runtime pieces, packaged it into a format the DEA could run, and uploaded the result to NFS. When a DEA started an app, it fetched the package it needed from NFS and ran it. Packaging (staging) is complex and touches a lot of files, so it takes a long time, and the thin CloudController could not keep up; staging was therefore moved out into a separate process. Whenever the CloudController needs something staged, it sends a request to the Stage queue, and the Stager receives the requests and handles them one by one.

As everyone knows, Java, Python, and Ruby programs all carry a pile of dependencies, such as Ruby gems. Downloading many gems on every staging run is slow, wasteful, and thankless, so the PackageCache module was developed to cache commonly used dependency packages, which makes staging run much more smoothly.

That solved the performance problem, but CloudFoundry also emphasizes high availability. In the original design, the NFS holding the runnable packages was a single point of failure: if it crashed, the entire deployment capability of CloudFoundry was paralyzed. That is intolerable, and at growing scale one machine would sooner or later be unable to hold all the packages anyway. So the BlobStore module replaced the NFS, providing highly available, scalable storage.
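The PackageCache idea, download a dependency once and reuse it on every later staging run, reduces to a content-addressed lookup. A minimal sketch (names and cache layout are mine, not MoPaaS's):

```python
import hashlib
import os
import shutil
import tempfile

# Illustrative cache location; a fresh directory per process.
CACHE_DIR = tempfile.mkdtemp(prefix="package_cache_")

def cached_fetch(name, version, fetch):
    """Return a local path for <name>-<version>, calling the slow
    `fetch(name, version)` download only on a cache miss."""
    key = hashlib.sha1(("%s-%s" % (name, version)).encode()).hexdigest()
    path = os.path.join(CACHE_DIR, key)
    if not os.path.exists(path):
        shutil.copy(fetch(name, version), path)  # miss: download and keep
    return path
```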

IP changed again

eth0      Link encap:Ethernet  HWaddr 52:54:00:4f:22:75
          inet addr:10.4.3.37  Bcast:10.4.255.255  Mask:255.255.0.0
          inet6 addr: fe80::5054:ff:fe4f:2275/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:955591231 errors:0 dropped:0 overruns:0 frame:0
          TX packets:832230468 errors:0 dropped:45 overruns:45 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:211313506599 (211.3 GB)  TX bytes:4458700864140 (4.4 TB)

When CloudFoundry first launched, a droplet contained only simple commands for the runtime, such as start and stop. A user application could access the filesystem at will, roam the internal network unhindered, saturate the CPU, exhaust memory, and fill the disk; every destructive operation you can think of was possible. Terrifying. CloudFoundry obviously would not let that situation last, and they have since developed Warden, a container for running programs. The container provides an isolated environment: a droplet gets only limited CPU, memory, disk, and network access, and can no longer do damage. Warden's Linux implementation partitions kernel resources into namespaces, with cgroups as the underlying mechanism. This design performs better than a virtual machine, starts faster, and still provides adequate security. For networking, each Warden instance has a virtual network interface with its own IP, and these interfaces attach to a subnet inside the DEA; security is enforced with iptables. For disk, each Warden instance has its own filesystem, implemented with aufs. Aufs shares read-only content between Warden instances while keeping writable content separate, improving disk-space utilization; and because aufs writes are confined to a fixed-size file, filling the disk is no longer possible either.
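The cgroup mechanism underneath Warden is ultimately just files under /sys/fs/cgroup. As a dry-run sketch that only computes the writes a warden-like memory/CPU cap would need under the cgroup v1 layout (applying them for real requires root; the container name and values are illustrative):

```python
def cgroup_limits(container, mem_bytes, cpu_shares, root="/sys/fs/cgroup"):
    """Return the (file, value) pairs that would cap a container's
    memory and CPU weight under cgroup v1."""
    return [
        ("%s/memory/%s/memory.limit_in_bytes" % (root, container), str(mem_bytes)),
        ("%s/cpu/%s/cpu.shares" % (root, container), str(cpu_shares)),
    ]
```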

BAE 3.0 says it uses LXC containers; from the introduction it sounds very similar to CF.
