# -*- coding: utf-8 -*-
'''
OpenStack Nova Cloud Module
===========================

OpenStack is an open source project that is in use by a number of cloud
providers, each of which has its own way of using it.

The OpenStack Nova module for Salt Cloud was bootstrapped from the OpenStack
module for Salt Cloud, which uses a libcloud-based connection. The Nova module
is designed to use the nova and glance modules already built into Salt.

These modules use the Python novaclient and glanceclient libraries,
respectively. In order to use this module, the proper salt configuration must
also be in place. This can be specified in the master config, the minion
config, a set of grains or a set of pillars.

.. code-block:: yaml

    my_openstack_profile:
      keystone.user: admin
      keystone.password: verybadpass
      keystone.tenant: admin
      keystone.auth_url: 'http://127.0.0.1:5000/v2.0/'

Note that there is currently a dependency upon netaddr. This can be installed
on Debian-based systems by means of the python-netaddr package.
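
For example, on a Debian-based system (the ``python-netaddr`` package named
above; on other platforms ``pip install netaddr`` should work equally well):

.. code-block:: bash

    apt-get install python-netaddr
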
This module currently requires the latest develop branch of Salt to be
installed.

This module has been tested to work with HP Cloud and Rackspace. See the
documentation for specific options for either of these providers. These
examples could be set up in the cloud configuration at
``/etc/salt/cloud.providers`` or
``/etc/salt/cloud.providers.d/openstack.conf``:

.. code-block:: yaml

    my-openstack-config:
      # The name of the configuration profile to use on said minion
      config_profile: my_openstack_profile

      ssh_key_name: mykey

      driver: nova
      userdata_file: /tmp/userdata.txt

To use keystoneauth1 instead of keystoneclient, include the
``use_keystoneauth`` option in the provider config.

.. note:: This is required in order to use keystone v3 for authentication.

.. code-block:: yaml

    my-openstack-config:
      use_keystoneauth: True
      identity_url: 'https://controller:5000/v3'
      auth_version: 3
      compute_name: nova
      compute_region: RegionOne
      service_type: compute
      verify: '/path/to/custom/certs/ca-bundle.crt'
      tenant: admin
      user: admin
      password: passwordgoeshere
      driver: nova

Note: by default the nova driver will attempt to verify its connection
using the system certificates. If you need to verify against another bundle
of CA certificates, or want to skip verification altogether, you will need to
specify the ``verify`` option. You can specify True or False to verify (or not)
against system certificates, a path to a bundle of CA certs to check against,
or None to allow keystoneauth to search for the certificates on its own
(defaults to True).
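
For example, to verify against a custom CA bundle, or to skip verification
entirely (both provider names here are illustrative):

.. code-block:: yaml

    my-openstack-config:
      use_keystoneauth: True
      verify: '/path/to/custom/certs/ca-bundle.crt'

    my-insecure-openstack-config:
      use_keystoneauth: True
      verify: False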

For local installations that only use private IP address ranges, the
following option may be useful. Using the old syntax:

Note: For api use, you will need an auth plugin. The base novaclient does not
support apikeys, but some providers such as rackspace have extended keystone to
accept them.

.. code-block:: yaml

    my-openstack-config:
      # Ignore IP addresses on this network for bootstrap
      ignore_cidr: 192.168.50.0/24

    my-nova:
      identity_url: 'https://identity.api.rackspacecloud.com/v2.0/'
      compute_region: IAD
      user: myusername
      password: mypassword
      tenant: <userid>
      driver: nova

    my-api:
      identity_url: 'https://identity.api.rackspacecloud.com/v2.0/'
      compute_region: IAD
      user: myusername
      api_key: <api_key>
      os_auth_plugin: rackspace
      tenant: <userid>
      driver: nova
      networks:
        - net-id: 47a38ff2-fe21-4800-8604-42bd1848e743
        - net-id: 00000000-0000-0000-0000-000000000000
        - net-id: 11111111-1111-1111-1111-111111111111

This is an example profile.

.. code-block:: yaml

    debian8-2-iad-cloudqe4:
      provider: cloudqe4-iad
      size: performance1-2
      image: Debian 8 (Jessie) (PVHVM)
      script_args: -UP -p python-zmq git 2015.8

and one using cinder volumes already attached:

.. code-block:: yaml

    # create the block storage device
    centos7-2-iad-rackspace:
      provider: rackspace-iad
      size: general1-2
      block_device:
        - source: image
          id: <image_id>
          dest: volume
          size: 100
          shutdown: <preserve/remove>
          bootindex: 0

    # with the volume already created
    centos7-2-iad-rackspace:
      provider: rackspace-iad
      size: general1-2
      boot_volume: <volume id>

    # create the volume from a snapshot
    centos7-2-iad-rackspace:
      provider: rackspace-iad
      size: general1-2
      snapshot: <cinder snapshot id>

    # create an extra ephemeral disk
    centos7-2-iad-rackspace:
      provider: rackspace-iad
      size: general1-2
      ephemeral:
        - size: 100
          format: <swap/ext4>

    # create an extra swap disk
    centos7-2-iad-rackspace:
      provider: rackspace-iad
      size: general1-2
      swap: <size>

Block Device can also be used for having more than one block storage device
attached:

.. code-block:: yaml

    centos7-2-iad-rackspace:
      provider: rackspace-iad
      size: general1-2
      block_device:
        - source: image
          id: <image_id>
          dest: volume
          size: 100
          shutdown: <preserve/remove>
          bootindex: 0
        - source: blank
          dest: volume
          device: xvdc
          size: 100
          shutdown: <preserve/remove>

Floating IPs can be auto assigned, and ssh_interface can be set to fixed_ips,
floating_ips, public_ips or private_ips:

.. code-block:: yaml

    centos7-2-iad-rackspace:
      provider: rackspace-iad
      size: general1-2
      ssh_interface: floating_ips
      floating_ip:
        auto_assign: True
        pool: public

Note: You must include the default net-ids when setting networks, or the server
will be created without the rest of the interfaces.

Note: For rackconnect v3, rackconnectv3 needs to be specified with the
rackconnect v3 cloud network as its variable.
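
For example (the cloud network name placeholder below is illustrative):

.. code-block:: yaml

    centos7-2-iad-rackspace:
      provider: rackspace-iad
      size: general1-2
      rackconnectv3: <rackconnect v3 cloud network>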
'''
# pylint: disable=E0102
# Import python libs
from __future__ import absolute_import
import os
import logging
import socket
import pprint
import yaml
# Import Salt Libs
import salt.ext.six as six
import salt.utils
import salt.client
from salt.utils.openstack import nova
try:
    import novaclient.exceptions
except ImportError:
    pass
# Import Salt Cloud Libs
from salt.cloud.libcloudfuncs import * # pylint: disable=W0614,W0401
import salt.utils.cloud
import salt.utils.pycrypto as sup
import salt.config as config
from salt.utils import namespaced_function
from salt.exceptions import (
    SaltCloudConfigError,
    SaltCloudNotFound,
    SaltCloudSystemExit,
    SaltCloudExecutionFailure,
    SaltCloudExecutionTimeout
)
try:
    from netaddr import all_matching_cidrs
    HAS_NETADDR = True
except ImportError:
    HAS_NETADDR = False
# Get logging started
log = logging.getLogger(__name__)
request_log = logging.getLogger('requests')
__virtualname__ = 'nova'
# Some of the libcloud functions need to be in the same namespace as the
# functions defined in the module, so we create new function objects inside
# this module namespace
script = namespaced_function(script, globals())
reboot = namespaced_function(reboot, globals())
# Only load in this module if the Nova configurations are in place
def __virtual__():
    '''
    Check for Nova configurations
    '''
    request_log.setLevel(getattr(logging, __opts__.get('requests_log_level', 'warning').upper()))

    if get_configured_provider() is False:
        return False

    if get_dependencies() is False:
        return False

    return __virtualname__


def get_configured_provider():
    '''
    Return the first configured instance.
    '''
    return config.is_provider_configured(
        __opts__,
        __active_provider_name__ or __virtualname__,
        ('user', 'tenant', 'identity_url', 'compute_region',)
    )


def get_dependencies():
    '''
    Warn if dependencies aren't met.
    '''
    deps = {
        'netaddr': HAS_NETADDR,
        'python-novaclient': nova.check_nova(),
    }
    return config.check_driver_dependencies(
        __virtualname__,
        deps
    )


def get_conn():
    '''
    Return a conn object for the passed VM data
    '''
    vm_ = get_configured_provider()

    kwargs = vm_.copy()  # pylint: disable=E1103

    kwargs['username'] = vm_['user']
    kwargs['project_id'] = vm_['tenant']
    kwargs['auth_url'] = vm_['identity_url']
    kwargs['region_name'] = vm_['compute_region']
    # Use .get() so a provider config without use_keystoneauth set does not
    # raise a KeyError here.
    kwargs['use_keystoneauth'] = vm_.get('use_keystoneauth', False)

    if 'password' in vm_:
        kwargs['password'] = vm_['password']

    if 'verify' in vm_ and kwargs['use_keystoneauth'] is True:
        kwargs['verify'] = vm_['verify']
    elif 'verify' in vm_ and kwargs['use_keystoneauth'] is False:
        log.warning('SSL Certificate verification option is specified but use_keystoneauth is False or not present')

    conn = nova.SaltNova(**kwargs)

    return conn


def avail_locations(conn=None, call=None):
    '''
    Return a list of locations
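
    CLI Example (the provider name is illustrative):

    .. code-block:: bash

        salt-cloud --list-locations my-openstack-config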
    '''
    if call == 'action':
        raise SaltCloudSystemExit(
            'The avail_locations function must be called with '
            '-f or --function, or with the --list-locations option'
        )

    if conn is None:
        conn = get_conn()

    endpoints = nova.get_entry(conn.get_catalog(), 'type', 'compute')['endpoints']
    ret = {}
    for endpoint in endpoints:
        ret[endpoint['region']] = endpoint

    return ret


def get_image(conn, vm_):
    '''
    Return the image object to use
    '''
    vm_image = config.get_cloud_config_value('image', vm_, __opts__, default='').encode(
        'ascii', 'salt-cloud-force-ascii'
    )
    if not vm_image:
        log.debug('No image set, must be boot from volume')
        return None

    image_list = conn.image_list()

    for img in image_list:
        if vm_image in (image_list[img]['id'], img):
            return image_list[img]['id']

    try:
        image = conn.image_show(vm_image)
        return image['id']
    except novaclient.exceptions.NotFound as exc:
        raise SaltCloudNotFound(
            'The specified image, \'{0}\', could not be found: {1}'.format(
                vm_image,
                str(exc)
            )
        )


def get_block_mapping_opts(vm_):
    '''
    Get the block device mapping options from the VM profile (block_device,
    ephemeral, swap, snapshot and boot volume settings).
    '''
    ret = {}
    ret['block_device_mapping'] = config.get_cloud_config_value('block_device_mapping', vm_, __opts__, default={})
    ret['block_device'] = config.get_cloud_config_value('block_device', vm_, __opts__, default=[])
    ret['ephemeral'] = config.get_cloud_config_value('ephemeral', vm_, __opts__, default=[])
    ret['swap'] = config.get_cloud_config_value('swap', vm_, __opts__, default=None)
    ret['snapshot'] = config.get_cloud_config_value('snapshot', vm_, __opts__, default=None)
    ret['boot_volume'] = config.get_cloud_config_value('boot_volume', vm_, __opts__, default=None)
    return ret


def show_instance(name, call=None):
    '''
    Show the details from the provider concerning an instance
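
    CLI Example (the instance name is illustrative):

    .. code-block:: bash

        salt-cloud -a show_instance myinstance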
    '''
    if call != 'action':
        raise SaltCloudSystemExit(
            'The show_instance action must be called with -a or --action.'
        )

    conn = get_conn()
    node = conn.show_instance(name).__dict__
    __utils__['cloud.cache_node'](node, __active_provider_name__, __opts__)
    return node


def get_size(conn, vm_):
    '''
    Return the VM's size object
    '''
    sizes = conn.list_sizes()
    vm_size = config.get_cloud_config_value('size', vm_, __opts__)
    if not vm_size:
        return sizes[0]

    for size in sizes:
        if vm_size and str(vm_size) in (str(sizes[size]['id']), str(size)):
            return sizes[size]['id']
    raise SaltCloudNotFound(
        'The specified size, \'{0}\', could not be found.'.format(vm_size)
    )


def preferred_ip(vm_, ips):
    '''
    Return the preferred Internet protocol. Either 'ipv4' (default) or 'ipv6'.
    '''
    proto = config.get_cloud_config_value(
        'protocol', vm_, __opts__, default='ipv4', search_global=False
    )

    family = socket.AF_INET
    if proto == 'ipv6':
        family = socket.AF_INET6
    for ip in ips:
        try:
            socket.inet_pton(family, ip)
            return ip
        except Exception:
            continue
    return False


def ignore_cidr(vm_, ip):
    '''
    Return True if we are to ignore the specified IP. Compatible with IPv4.
    '''
    if HAS_NETADDR is False:
        log.error('Error: netaddr is not installed')
        return 'Error: netaddr is not installed'

    cidr = config.get_cloud_config_value(
        'ignore_cidr', vm_, __opts__, default='', search_global=False
    )
    if cidr != '' and all_matching_cidrs(ip, [cidr]):
        log.warning(
            'IP "{0}" found within "{1}"; ignoring it.'.format(ip, cidr)
        )
        return True

    return False


def ssh_interface(vm_):
    '''
    Return the ssh_interface type to connect to. Either 'public_ips' (default)
    or 'private_ips'.
    '''
    return config.get_cloud_config_value(
        'ssh_interface', vm_, __opts__, default='public_ips',
        search_global=False
    )


def rackconnect(vm_):
    '''
    Determine if we should wait for rackconnect automation before running.
    Either 'False' (default) or 'True'.
    '''
    return config.get_cloud_config_value(
        'rackconnect', vm_, __opts__, default=False,
        search_global=False
    )


def rackconnectv3(vm_):
    '''
    Determine if the server is using rackconnectv3 or not.
    Return the rackconnect network name or False.
    '''
    return config.get_cloud_config_value(
        'rackconnectv3', vm_, __opts__, default=False,
        search_global=False
    )


def cloudnetwork(vm_):
    '''
    Determine if we should use an extra network to bootstrap.
    Either 'False' (default) or 'True'.
    '''
    return config.get_cloud_config_value(
        'cloudnetwork', vm_, __opts__, default=False,
        search_global=False
    )


def managedcloud(vm_):
    '''
    Determine if we should wait for the managed cloud automation before
    running. Either 'False' (default) or 'True'.
    '''
    return config.get_cloud_config_value(
        'managedcloud', vm_, __opts__, default=False,
        search_global=False
    )


def destroy(name, conn=None, call=None):
    '''
    Delete a single VM
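
    CLI Example (the instance name is illustrative):

    .. code-block:: bash

        salt-cloud -d myinstance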
    '''
    if call == 'function':
        raise SaltCloudSystemExit(
            'The destroy action must be called with -d, --destroy, '
            '-a or --action.'
        )

    __utils__['cloud.fire_event'](
        'event',
        'destroying instance',
        'salt/cloud/{0}/destroying'.format(name),
        args={'name': name},
        sock_dir=__opts__['sock_dir'],
        transport=__opts__['transport']
    )

    if not conn:
        conn = get_conn()  # pylint: disable=E0602

    node = conn.server_by_name(name)
    profiles = get_configured_provider()['profiles']  # pylint: disable=E0602
    if node is None:
        log.error('Unable to find the VM {0}'.format(name))
        return False
    profile = None
    if 'metadata' in node.extra and 'profile' in node.extra['metadata']:
        profile = node.extra['metadata']['profile']

    flush_mine_on_destroy = False
    if profile and profile in profiles and 'flush_mine_on_destroy' in profiles[profile]:
        flush_mine_on_destroy = profiles[profile]['flush_mine_on_destroy']

    if flush_mine_on_destroy:
        log.info('Clearing Salt Mine: {0}'.format(name))
        salt_client = salt.client.get_local_client(__opts__['conf_file'])
        minions = salt_client.cmd(name, 'mine.flush')

    log.info('Clearing Salt Mine: {0}, {1}'.format(
        name,
        flush_mine_on_destroy
    ))
    log.info('Destroying VM: {0}'.format(name))
    ret = conn.delete(node.id)
    if ret:
        log.info('Destroyed VM: {0}'.format(name))
        # Fire destroy action
        __utils__['cloud.fire_event'](
            'event',
            'destroyed instance',
            'salt/cloud/{0}/destroyed'.format(name),
            args={'name': name},
            sock_dir=__opts__['sock_dir'],
            transport=__opts__['transport']
        )
        if __opts__.get('delete_sshkeys', False) is True:
            salt.utils.cloud.remove_sshkey(getattr(node, __opts__.get('ssh_interface', 'public_ips'))[0])
        if __opts__.get('update_cachedir', False) is True:
            __utils__['cloud.delete_minion_cachedir'](name, __active_provider_name__.split(':')[0], __opts__)
        __utils__['cloud.cachedir_index_del'](name)
        return True

    log.error('Failed to Destroy VM: {0}'.format(name))
    return False


def request_instance(vm_=None, call=None):
    '''
    Put together all of the information necessary to request an instance
    through Novaclient and then fire off the request for the instance.

    Returns data about the instance
    '''
    if call == 'function':
        # Technically this function may be called other ways too, but it
        # definitely cannot be called with --function.
        raise SaltCloudSystemExit(
            'The request_instance action must be called with -a or --action.'
        )
    log.info('Creating Cloud VM {0}'.format(vm_['name']))
    salt.utils.cloud.check_name(vm_['name'], 'a-zA-Z0-9._-')
    conn = get_conn()
    kwargs = vm_.copy()

    try:
        kwargs['image_id'] = get_image(conn, vm_)
    except Exception as exc:
        raise SaltCloudSystemExit(
            'Error creating {0} on OPENSTACK\n\n'
            'Could not find image {1}: {2}\n'.format(
                vm_['name'], vm_['image'], exc
            )
        )

    try:
        kwargs['flavor_id'] = get_size(conn, vm_)
    except Exception as exc:
        raise SaltCloudSystemExit(
            'Error creating {0} on OPENSTACK\n\n'
            'Could not find size {1}: {2}\n'.format(
                vm_['name'], vm_['size'], exc
            )
        )

    kwargs['key_name'] = config.get_cloud_config_value(
        'ssh_key_name', vm_, __opts__, search_global=False
    )

    security_groups = config.get_cloud_config_value(
        'security_groups', vm_, __opts__, search_global=False
    )
    if security_groups is not None:
        vm_groups = security_groups
        avail_groups = conn.secgroup_list()
        group_list = []

        for vmg in vm_groups:
            if vmg in [name for name, details in six.iteritems(avail_groups)]:
                group_list.append(vmg)
            else:
                raise SaltCloudNotFound(
                    'No such security group: \'{0}\''.format(vmg)
                )

        kwargs['security_groups'] = group_list

    avz = config.get_cloud_config_value(
        'availability_zone', vm_, __opts__, default=None, search_global=False
    )
    if avz is not None:
        kwargs['availability_zone'] = avz

    kwargs['nics'] = config.get_cloud_config_value(
        'networks', vm_, __opts__, search_global=False, default=None
    )

    files = config.get_cloud_config_value(
        'files', vm_, __opts__, search_global=False
    )
    if files:
        kwargs['files'] = {}
        for src_path in files:
            if os.path.exists(files[src_path]):
                with salt.utils.fopen(files[src_path], 'r') as fp_:
                    kwargs['files'][src_path] = fp_.read()
            else:
                kwargs['files'][src_path] = files[src_path]

    userdata_file = config.get_cloud_config_value(
        'userdata_file', vm_, __opts__, search_global=False, default=None
    )
    if userdata_file is not None:
        try:
            with salt.utils.fopen(userdata_file, 'r') as fp_:
                kwargs['userdata'] = salt.utils.cloud.userdata_template(
                    __opts__, vm_, fp_.read()
                )
        except Exception as exc:
            log.exception(
                'Failed to read userdata from %s: %s', userdata_file, exc)

    kwargs['config_drive'] = config.get_cloud_config_value(
        'config_drive', vm_, __opts__, search_global=False
    )

    kwargs.update(get_block_mapping_opts(vm_))

    __utils__['cloud.fire_event'](
        'event',
        'requesting instance',
        'salt/cloud/{0}/requesting'.format(vm_['name']),
        args={
            'kwargs': {
                'name': kwargs['name'],
                'image': kwargs.get('image_id', 'Boot From Volume'),
                'size': kwargs['flavor_id'],
            }
        },
        sock_dir=__opts__['sock_dir'],
        transport=__opts__['transport']
    )

    try:
        data = conn.boot(**kwargs)
    except Exception as exc:
        raise SaltCloudSystemExit(
            'Error creating {0} on Nova\n\n'
            'The following exception was thrown by libcloud when trying to '
            'run the initial deployment: {1}\n'.format(
                vm_['name'], exc
            )
        )
    if data.extra.get('password', None) is None and vm_.get('key_filename', None) is None:
        raise SaltCloudSystemExit('No password returned. Set ssh_key_file.')

    floating_ip_conf = config.get_cloud_config_value(
        'floating_ip', vm_, __opts__, search_global=False, default={}
    )
    if floating_ip_conf.get('auto_assign', False):
        floating_ip = None
        if floating_ip_conf.get('ip_address', None) is not None:
            ip_address = floating_ip_conf.get('ip_address', None)
            try:
                fl_ip_dict = conn.floating_ip_show(ip_address)
                floating_ip = fl_ip_dict['ip']
            except Exception as err:
                raise SaltCloudSystemExit(
                    'Error assigning floating_ip for {0} on Nova\n\n'
                    'The following exception was thrown by libcloud when trying to '
                    'assign a floating ip: {1}\n'.format(
                        vm_['name'], err
                    )
                )
        else:
            pool = floating_ip_conf.get('pool', 'public')
            try:
                floating_ip = conn.floating_ip_create(pool)['ip']
            except Exception:
                log.info('A new IP address could not be allocated. '
                         'An IP address will be pulled from the already '
                         'allocated list; this will cause a race condition '
                         'when building in parallel.')
                for fl_ip, opts in six.iteritems(conn.floating_ip_list()):
                    if opts['fixed_ip'] is None and opts['pool'] == pool:
                        floating_ip = fl_ip
                        break
                if floating_ip is None:
                    log.error('No IP addresses available to allocate for this server: {0}'.format(vm_['name']))

        def __query_node_data(vm_):
            try:
                node = show_instance(vm_['name'], 'action')
                log.debug(
                    'Loaded node data for {0}:\n{1}'.format(
                        vm_['name'],
                        pprint.pformat(node)
                    )
                )
            except Exception as err:
                log.error(
                    'Failed to get nodes list: {0}'.format(
                        err
                    ),
                    # Show the traceback if the debug logging level is enabled
                    exc_info_on_loglevel=logging.DEBUG
                )
                # Trigger a failure in the wait for IP function
                return False
            return node['state'] == 'ACTIVE' or None

        # Associating the floating IP before the Nova instance has completed
        # building would fail, so wait until the instance is ACTIVE and
        # associate the floating IP afterwards.
        try:
            salt.utils.cloud.wait_for_ip(
                __query_node_data,
                update_args=(vm_,)
            )
        except (SaltCloudExecutionTimeout, SaltCloudExecutionFailure) as exc:
            try:
                # It might be already up, let's destroy it!
                destroy(vm_['name'])
            except SaltCloudSystemExit:
                pass
            finally:
                raise SaltCloudSystemExit(str(exc))

        try:
            conn.floating_ip_associate(vm_['name'], floating_ip)
            vm_['floating_ip'] = floating_ip
        except Exception as exc:
            raise SaltCloudSystemExit(
                'Error assigning floating_ip for {0} on Nova\n\n'
                'The following exception was thrown by libcloud when trying to '
                'assign a floating ip: {1}\n'.format(
                    vm_['name'], exc
                )
            )

    if not vm_.get('password', None):
        vm_['password'] = data.extra.get('password', '')

    return data, vm_


def _query_node_data(vm_, data, conn):
    try:
        node = show_instance(vm_['name'], 'action')
        log.debug('Loaded node data for {0}:'
                  '\n{1}'.format(vm_['name'], pprint.pformat(node)))
    except Exception as err:
        # Show the traceback if the debug logging level is enabled
        log.error('Failed to get nodes list: {0}'.format(err),
                  exc_info_on_loglevel=logging.DEBUG)
        # Trigger a failure in the wait for IP function
        return False

    running = node['state'] == 'ACTIVE'
    if not running:
        # Still not running, trigger another iteration
        return

    if rackconnect(vm_) is True:
        extra = node.get('extra', {})
        rc_status = extra.get('metadata', {}).get('rackconnect_automation_status', '')
        if rc_status != 'DEPLOYED':
            log.debug('Waiting for Rackconnect automation to complete')
            return

    if managedcloud(vm_) is True:
        extra = conn.server_show_libcloud(node['id']).extra
        mc_status = extra.get('metadata', {}).get('rax_service_level_automation', '')
        if mc_status != 'Complete':
            log.debug('Waiting for managed cloud automation to complete')
            return

    access_ip = node.get('extra', {}).get('access_ip', '')

    rcv3 = rackconnectv3(vm_) in node['addresses']
    sshif = ssh_interface(vm_) in node['addresses']

    if any((rcv3, sshif)):
        networkname = rackconnectv3(vm_) if rcv3 else ssh_interface(vm_)
        for network in node['addresses'].get(networkname, []):
            if network['version'] == 4:
                access_ip = network['addr']
                break
        vm_['cloudnetwork'] = True

    # Conditions to pass this
    #
    # Rackconnect v2: vm_['rackconnect'] = True
    #    If this is True, then the server will not be accessible from the ipv4 address in public_ips.
    #    That interface gets turned off, and an ipv4 from the dedicated firewall is routed to the
    #    server. In this case we can use the private_ips for ssh_interface, or the access_ip.
    #
    # Rackconnect v3: vm['rackconnectv3'] = <cloudnetwork>
    #    If this is the case, salt will need to use the cloud network to login to the server. There
    #    is no ipv4 address automatically provisioned for these servers when they are booted. SaltCloud
    #    also cannot use the private_ips, because that traffic is dropped at the hypervisor.
    #
    # CloudNetwork: vm['cloudnetwork'] = True
    #    If this is True, then we should have an access_ip at this point set to the ip on the cloud
    #    network. If that network does not exist in the 'addresses' dictionary, then SaltCloud will
    #    use the initial access_ip, and not overwrite anything.

    if (any((cloudnetwork(vm_), rackconnect(vm_)))
            and (ssh_interface(vm_) != 'private_ips' or rcv3)
            and access_ip != ''):
        data.public_ips = [access_ip]
        return data

    result = []

    if ('private_ips' not in node
            and 'public_ips' not in node
            and 'floating_ips' not in node
            and 'fixed_ips' not in node
            and 'access_ip' in node.get('extra', {})):
        result = [node['extra']['access_ip']]

    private = node.get('private_ips', [])
    public = node.get('public_ips', [])
    fixed = node.get('fixed_ips', [])
    floating = node.get('floating_ips', [])

    if private and not public:
        log.warning('Private IPs returned, but not public. '
                    'Checking for misidentified IPs')
        for private_ip in private:
            private_ip = preferred_ip(vm_, [private_ip])
            if private_ip is False:
                continue
            if salt.utils.cloud.is_public_ip(private_ip):
                log.warning('{0} is a public IP'.format(private_ip))
                data.public_ips.append(private_ip)
                log.warning('Public IP address was not ready when we last checked. '
                            'Appending public IP address now.')
                public = data.public_ips
            else:
                log.warning('{0} is a private IP'.format(private_ip))
                ignore_ip = ignore_cidr(vm_, private_ip)
                if private_ip not in data.private_ips and not ignore_ip:
                    result.append(private_ip)

    # populate return data with private_ips
    # when ssh_interface is set to private_ips and public_ips exist
    if not result and ssh_interface(vm_) == 'private_ips':
        for private_ip in private:
            ignore_ip = ignore_cidr(vm_, private_ip)
            if private_ip not in data.private_ips and not ignore_ip:
                result.append(private_ip)

    non_private_ips = []

    if public:
        data.public_ips = public
        if ssh_interface(vm_) == 'public_ips':
            non_private_ips.append(public)

    if floating:
        data.floating_ips = floating
        if ssh_interface(vm_) == 'floating_ips':
            non_private_ips.append(floating)

    if fixed:
        data.fixed_ips = fixed
        if ssh_interface(vm_) == 'fixed_ips':
            non_private_ips.append(fixed)

    if non_private_ips:
        log.debug('result = {0}'.format(non_private_ips))
        data.private_ips = result
        if ssh_interface(vm_) != 'private_ips':
            return data

    if result:
        log.debug('result = {0}'.format(result))
        data.private_ips = result
        if ssh_interface(vm_) == 'private_ips':
            return data


def create(vm_):
    '''
    Create a single VM from a data dict
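
    CLI Example (profile and VM names are illustrative):

    .. code-block:: bash

        salt-cloud -p my-openstack-profile myinstance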
    '''
    try:
        # Check for required profile parameters before sending any API calls.
        if vm_['profile'] and config.is_profile_configured(__opts__,
                                                           __active_provider_name__ or 'nova',
                                                           vm_['profile'],
                                                           vm_=vm_) is False:
            return False
    except AttributeError:
        pass

    deploy = config.get_cloud_config_value('deploy', vm_, __opts__)
    key_filename = config.get_cloud_config_value(
        'ssh_key_file', vm_, __opts__, search_global=False, default=None
    )
    if key_filename is not None and not os.path.isfile(key_filename):
        raise SaltCloudConfigError(
            'The defined ssh_key_file \'{0}\' does not exist'.format(
                key_filename
            )
        )

    vm_['key_filename'] = key_filename

    # Since using "provider: <provider-engine>" is deprecated, alias provider
    # to use driver: "driver: <provider-engine>"
    if 'provider' in vm_:
        vm_['driver'] = vm_.pop('provider')

    __utils__['cloud.fire_event'](
        'event',
        'starting create',
        'salt/cloud/{0}/creating'.format(vm_['name']),
        args={
            'name': vm_['name'],
            'profile': vm_['profile'],
            'provider': vm_['driver'],
        },
        sock_dir=__opts__['sock_dir'],
        transport=__opts__['transport']
    )
    conn = get_conn()

    if 'instance_id' in vm_:
        # This was probably created via another process, and doesn't have
        # things like salt keys created yet, so let's create them now.
        if 'pub_key' not in vm_ and 'priv_key' not in vm_:
            log.debug('Generating minion keys for \'{0[name]}\''.format(vm_))
            vm_['priv_key'], vm_['pub_key'] = salt.utils.cloud.gen_keys(
                salt.config.get_cloud_config_value(
                    'keysize',
                    vm_,
                    __opts__
                )
            )
        data = conn.server_show_libcloud(vm_['instance_id'])
        if vm_['key_filename'] is None and 'change_password' in __opts__ and __opts__['change_password'] is True:
            vm_['password'] = sup.secure_password()
            conn.root_password(vm_['instance_id'], vm_['password'])
    else:
        # Put together all of the information required to request the instance,