gyp: update gyp to 0.2.0

PR-URL: https://github.com/nodejs/node-gyp/pull/2092
Reviewed-By: Rod Vagg <rod@vagg.org>
Authored by Ujjwal Sharma on 2020-04-07 07:13:04 +05:30; committed by Rod Vagg
parent 9aed6286a3
commit ebc34ec823
GPG key ID: C273792F7D83545D (no known key found for this signature in database)
60 changed files with 25980 additions and 22837 deletions

4 gyp/.flake8 Normal file

@@ -0,0 +1,4 @@
[flake8]
max-complexity = 10
max-line-length = 88
extend-ignore = E203,C901,E501
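
The 88-column limit matches the Black formatter's default, and E203 ("whitespace before ':'") conflicts with Black's slice formatting, which is presumably why both are configured as they are above. A minimal, hypothetical snippet of the Black-style slice spacing that E203 would otherwise flag:

# Illustration only (not part of the diff): Black formats slices with
# compound bounds as `x[a + b : c]`; pycodestyle's E203 would flag the
# space before ':' without the ignore above.
def trim(items, n):
    return items[n + 1 : len(items) - 1]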

31 gyp/.github/workflows/Python_tests.yml vendored Normal file

@@ -0,0 +1,31 @@
# TODO: Enable os: windows-latest
# TODO: Enable python-version: 3.5
# TODO: Enable pytest --doctest-modules
name: Python_tests
on: [push, pull_request]
jobs:
Python_tests:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
max-parallel: 15
matrix:
os: [macos-latest, ubuntu-latest] # , windows-latest]
python-version: [2.7, 3.6, 3.7, 3.8] # 3.5,
steps:
- uses: actions/checkout@v1
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements_dev.txt
- name: Lint with flake8
run: flake8 . --count --show-source --statistics
- name: Test with pytest
run: pytest
# - name: Run doctests with pytest
# run: pytest --doctest-modules

144 gyp/.gitignore vendored

@@ -1 +1,143 @@
*.pyc
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# static files generated from Django application using `collectstatic`
media
static

gyp/AUTHORS

@@ -13,3 +13,4 @@ Eric N. Vander Weele <ericvw@gmail.com>
Tom Freudenberg <th.freudenberg@gmail.com>
Julien Brianceau <jbriance@cisco.com>
Refael Ackermann <refack@gmail.com>
Ujjwal Sharma <ryzokuken@disroot.org>

4 gyp/CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,4 @@
# Code of Conduct
* [Node.js Code of Conduct](https://github.com/nodejs/admin/blob/master/CODE_OF_CONDUCT.md)
* [Node.js Moderation Policy](https://github.com/nodejs/admin/blob/master/Moderation-Policy.md)

32 gyp/CONTRIBUTING.md Normal file

@@ -0,0 +1,32 @@
# Contributing to gyp-next
## Code of Conduct
This project is bound to the [Node.js Code of Conduct](https://github.com/nodejs/admin/blob/master/CODE_OF_CONDUCT.md).
<a id="developers-certificate-of-origin"></a>
## Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
* (a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
* (b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
* (c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
* (d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.

gyp/LICENSE

@@ -1,3 +1,4 @@
Copyright (c) 2019 Ujjwal Sharma. All rights reserved.
Copyright (c) 2009 Google Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without


@@ -1 +0,0 @@
*

gyp/PRESUBMIT.py

@@ -1,138 +0,0 @@
# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Top-level presubmit script for GYP.
See http://dev.chromium.org/developers/how-tos/depottools/presubmit-scripts
for more details about the presubmit API built into gcl.
"""
PYLINT_BLACKLIST = [
# TODO: fix me.
# From SCons, not done in google style.
'test/lib/TestCmd.py',
'test/lib/TestCommon.py',
'test/lib/TestGyp.py',
]
PYLINT_DISABLED_WARNINGS = [
# TODO: fix me.
# Many tests include modules they don't use.
'W0611',
# Possible unbalanced tuple unpacking with sequence.
'W0632',
# Attempting to unpack a non-sequence.
'W0633',
# Include order doesn't properly include local files?
'F0401',
# Some use of built-in names.
'W0622',
# Some unused variables.
'W0612',
# Operator not preceded/followed by space.
'C0323',
'C0322',
# Unnecessary semicolon.
'W0301',
# Unused argument.
'W0613',
# String has no effect (docstring in wrong place).
'W0105',
# map/filter on lambda could be replaced by comprehension.
'W0110',
# Use of eval.
'W0123',
# Comma not followed by space.
'C0324',
# Access to a protected member.
'W0212',
# Bad indent.
'W0311',
# Line too long.
'C0301',
# Undefined variable.
'E0602',
# Not exception type specified.
'W0702',
# No member of that name.
'E1101',
# Dangerous default {}.
'W0102',
# Cyclic import.
'R0401',
# Others, too many to sort.
'W0201', 'W0232', 'E1103', 'W0621', 'W0108', 'W0223', 'W0231',
'R0201', 'E0101', 'C0321',
# ************* Module copy
# W0104:427,12:_test.odict.__setitem__: Statement seems to have no effect
'W0104',
]
def _LicenseHeader(input_api):
# Accept any year number from 2009 to the current year.
current_year = int(input_api.time.strftime('%Y'))
allowed_years = (str(s) for s in reversed(range(2009, current_year + 1)))
years_re = '(' + '|'.join(allowed_years) + ')'
# The (c) is deprecated, but tolerate it until it's removed from all files.
return (
r'.*? Copyright (\(c\) )?%(year)s Google Inc\. All rights reserved\.\n'
r'.*? Use of this source code is governed by a BSD-style license that '
r'can be\n'
r'.*? found in the LICENSE file\.\n'
) % {
'year': years_re,
}
def CheckChangeOnUpload(input_api, output_api):
report = []
report.extend(input_api.canned_checks.PanProjectChecks(
input_api, output_api, license_header=_LicenseHeader(input_api)))
return report
def CheckChangeOnCommit(input_api, output_api):
report = []
report.extend(input_api.canned_checks.PanProjectChecks(
input_api, output_api, license_header=_LicenseHeader(input_api)))
report.extend(input_api.canned_checks.CheckTreeIsOpen(
input_api, output_api,
'http://gyp-status.appspot.com/status',
'http://gyp-status.appspot.com/current'))
import os
import sys
old_sys_path = sys.path
try:
sys.path = ['pylib', 'test/lib'] + sys.path
blacklist = PYLINT_BLACKLIST
if sys.platform == 'win32':
blacklist = [os.path.normpath(x).replace('\\', '\\\\')
for x in PYLINT_BLACKLIST]
report.extend(input_api.canned_checks.RunPylint(
input_api,
output_api,
black_list=blacklist,
disabled_warnings=PYLINT_DISABLED_WARNINGS))
finally:
sys.path = old_sys_path
return report
TRYBOTS = [
'linux_try',
'mac_try',
'win_try',
]
def GetPreferredTryMasters(_, change):
return {
'client.gyp': { t: set(['defaulttests']) for t in TRYBOTS },
}

0 gyp/gyp.bat Normal file → Executable file

gyp/gyp_main.py

@@ -10,41 +10,42 @@ import subprocess
PY3 = bytes != str
# Below IsCygwin() function copied from pylib/gyp/common.py
def IsCygwin():
try:
out = subprocess.Popen("uname",
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
stdout, stderr = out.communicate()
if PY3:
stdout = stdout.decode("utf-8")
return "CYGWIN" in str(stdout)
except Exception:
return False
# Function copied from pylib/gyp/common.py
try:
out = subprocess.Popen(
"uname", stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
stdout, stderr = out.communicate()
if PY3:
stdout = stdout.decode("utf-8")
return "CYGWIN" in str(stdout)
except Exception:
return False
def UnixifyPath(path):
try:
if not IsCygwin():
return path
out = subprocess.Popen(["cygpath", "-u", path],
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
stdout, _ = out.communicate()
if PY3:
stdout = stdout.decode("utf-8")
return str(stdout)
except Exception:
return path
try:
if not IsCygwin():
return path
out = subprocess.Popen(
["cygpath", "-u", path], stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
stdout, _ = out.communicate()
if PY3:
stdout = stdout.decode("utf-8")
return str(stdout)
except Exception:
return path
# Make sure we're using the version of pylib in this repo, not one installed
# elsewhere on the system. Also convert to Unix style path on Cygwin systems,
# else the 'gyp' library will not be found
path = UnixifyPath(sys.argv[0])
sys.path.insert(0, os.path.join(os.path.dirname(path), 'pylib'))
import gyp
sys.path.insert(0, os.path.join(os.path.dirname(path), "pylib"))
import gyp # noqa: E402
if __name__ == '__main__':
sys.exit(gyp.script_main())
if __name__ == "__main__":
sys.exit(gyp.script_main())
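
For background on the bootstrap above: the wrapper prepends its bundled pylib directory to sys.path so this copy of gyp wins over any system-wide installation, converting the script path via cygpath on Cygwin first. A small sketch of that step, with a hypothetical example path:

import os
import sys

# Hypothetical Cygwin-style path, as `cygpath -u` would report it:
script_path = "/cygdrive/c/src/node-gyp/gyp/gyp_main.py"
sys.path.insert(0, os.path.join(os.path.dirname(script_path), "pylib"))
print(sys.path[0])  # /cygdrive/c/src/node-gyp/gyp/pylib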

gyp/pylib/gyp/MSVSNew.py

@@ -12,26 +12,28 @@ from operator import attrgetter
import gyp.common
try:
cmp
cmp
except NameError:
def cmp(x, y):
return (x > y) - (x < y)
def cmp(x, y):
return (x > y) - (x < y)
# Initialize random number generator
random.seed()
# GUIDs for project types
ENTRY_TYPE_GUIDS = {
'project': '{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}',
'folder': '{2150E333-8FDC-42A3-9474-1A3956D46DE8}',
"project": "{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}",
"folder": "{2150E333-8FDC-42A3-9474-1A3956D46DE8}",
}
#------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# Helper functions
def MakeGuid(name, seed='msvs_new'):
"""Returns a GUID for the specified target name.
def MakeGuid(name, seed="msvs_new"):
"""Returns a GUID for the specified target name.
Args:
name: Target name.
@@ -45,28 +47,39 @@ def MakeGuid(name, seed='msvs_new'):
determine the GUID to refer to explicitly. It also means that the GUID will
not change when the project for a target is rebuilt.
"""
# Calculate a MD5 signature for the seed and name.
d = hashlib.md5((str(seed) + str(name)).encode('utf-8')).hexdigest().upper()
# Convert most of the signature to GUID form (discard the rest)
guid = ('{' + d[:8] + '-' + d[8:12] + '-' + d[12:16] + '-' + d[16:20]
+ '-' + d[20:32] + '}')
return guid
# Calculate a MD5 signature for the seed and name.
d = hashlib.md5((str(seed) + str(name)).encode("utf-8")).hexdigest().upper()
# Convert most of the signature to GUID form (discard the rest)
guid = (
"{"
+ d[:8]
+ "-"
+ d[8:12]
+ "-"
+ d[12:16]
+ "-"
+ d[16:20]
+ "-"
+ d[20:32]
+ "}"
)
return guid
#------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
class MSVSSolutionEntry(object):
def __cmp__(self, other):
# Sort by name then guid (so things are in order on vs2008).
return cmp((self.name, self.get_guid()), (other.name, other.get_guid()))
def __cmp__(self, other):
# Sort by name then guid (so things are in order on vs2008).
return cmp((self.name, self.get_guid()), (other.name, other.get_guid()))
class MSVSFolder(MSVSSolutionEntry):
"""Folder in a Visual Studio project or solution."""
"""Folder in a Visual Studio project or solution."""
def __init__(self, path, name = None, entries = None,
guid = None, items = None):
"""Initializes the folder.
def __init__(self, path, name=None, entries=None, guid=None, items=None):
"""Initializes the folder.
Args:
path: Full path to the folder.
@@ -77,38 +90,46 @@ class MSVSFolder(MSVSSolutionEntry):
items: List of solution items to include in the folder project. May be
None, if the folder does not directly contain items.
"""
if name:
self.name = name
else:
# Use last layer.
self.name = os.path.basename(path)
if name:
self.name = name
else:
# Use last layer.
self.name = os.path.basename(path)
self.path = path
self.guid = guid
self.path = path
self.guid = guid
# Copy passed lists (or set to empty lists)
self.entries = sorted(entries or [], key=attrgetter('path'))
self.items = list(items or [])
# Copy passed lists (or set to empty lists)
self.entries = sorted(entries or [], key=attrgetter("path"))
self.items = list(items or [])
self.entry_type_guid = ENTRY_TYPE_GUIDS['folder']
self.entry_type_guid = ENTRY_TYPE_GUIDS["folder"]
def get_guid(self):
if self.guid is None:
# Use consistent guids for folders (so things don't regenerate).
self.guid = MakeGuid(self.path, seed='msvs_folder')
return self.guid
def get_guid(self):
if self.guid is None:
# Use consistent guids for folders (so things don't regenerate).
self.guid = MakeGuid(self.path, seed="msvs_folder")
return self.guid
#------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
class MSVSProject(MSVSSolutionEntry):
"""Visual Studio project."""
"""Visual Studio project."""
def __init__(self, path, name = None, dependencies = None, guid = None,
spec = None, build_file = None, config_platform_overrides = None,
fixpath_prefix = None):
"""Initializes the project.
def __init__(
self,
path,
name=None,
dependencies=None,
guid=None,
spec=None,
build_file=None,
config_platform_overrides=None,
fixpath_prefix=None,
):
"""Initializes the project.
Args:
path: Absolute path to the project file.
@@ -123,57 +144,59 @@ class MSVSProject(MSVSSolutionEntry):
used in place of the default for this target.
fixpath_prefix: the path used to adjust the behavior of _fixpath
"""
self.path = path
self.guid = guid
self.spec = spec
self.build_file = build_file
# Use project filename if name not specified
self.name = name or os.path.splitext(os.path.basename(path))[0]
self.path = path
self.guid = guid
self.spec = spec
self.build_file = build_file
# Use project filename if name not specified
self.name = name or os.path.splitext(os.path.basename(path))[0]
# Copy passed lists (or set to empty lists)
self.dependencies = list(dependencies or [])
# Copy passed lists (or set to empty lists)
self.dependencies = list(dependencies or [])
self.entry_type_guid = ENTRY_TYPE_GUIDS['project']
self.entry_type_guid = ENTRY_TYPE_GUIDS["project"]
if config_platform_overrides:
self.config_platform_overrides = config_platform_overrides
else:
self.config_platform_overrides = {}
self.fixpath_prefix = fixpath_prefix
self.msbuild_toolset = None
if config_platform_overrides:
self.config_platform_overrides = config_platform_overrides
else:
self.config_platform_overrides = {}
self.fixpath_prefix = fixpath_prefix
self.msbuild_toolset = None
def set_dependencies(self, dependencies):
self.dependencies = list(dependencies or [])
def set_dependencies(self, dependencies):
self.dependencies = list(dependencies or [])
def get_guid(self):
if self.guid is None:
# Set GUID from path
# TODO(rspangler): This is fragile.
# 1. We can't just use the project filename sans path, since there could
# be multiple projects with the same base name (for example,
# foo/unittest.vcproj and bar/unittest.vcproj).
# 2. The path needs to be relative to $SOURCE_ROOT, so that the project
# GUID is the same whether it's included from base/base.sln or
# foo/bar/baz/baz.sln.
# 3. The GUID needs to be the same each time this builder is invoked, so
# that we don't need to rebuild the solution when the project changes.
# 4. We should be able to handle pre-built project files by reading the
# GUID from the files.
self.guid = MakeGuid(self.name)
return self.guid
def get_guid(self):
if self.guid is None:
# Set GUID from path
# TODO(rspangler): This is fragile.
# 1. We can't just use the project filename sans path, since there could
# be multiple projects with the same base name (for example,
# foo/unittest.vcproj and bar/unittest.vcproj).
# 2. The path needs to be relative to $SOURCE_ROOT, so that the project
# GUID is the same whether it's included from base/base.sln or
# foo/bar/baz/baz.sln.
# 3. The GUID needs to be the same each time this builder is invoked, so
# that we don't need to rebuild the solution when the project changes.
# 4. We should be able to handle pre-built project files by reading the
# GUID from the files.
self.guid = MakeGuid(self.name)
return self.guid
def set_msbuild_toolset(self, msbuild_toolset):
self.msbuild_toolset = msbuild_toolset
def set_msbuild_toolset(self, msbuild_toolset):
self.msbuild_toolset = msbuild_toolset
#------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
class MSVSSolution(object):
"""Visual Studio solution."""
"""Visual Studio solution."""
def __init__(self, path, version, entries=None, variants=None,
websiteProperties=True):
"""Initializes the solution.
def __init__(
self, path, version, entries=None, variants=None, websiteProperties=True
):
"""Initializes the solution.
Args:
path: Path to solution file.
@@ -185,152 +208,163 @@ class MSVSSolution(object):
websiteProperties: Flag to decide if the website properties section
is generated.
"""
self.path = path
self.websiteProperties = websiteProperties
self.version = version
self.path = path
self.websiteProperties = websiteProperties
self.version = version
# Copy passed lists (or set to empty lists)
self.entries = list(entries or [])
# Copy passed lists (or set to empty lists)
self.entries = list(entries or [])
if variants:
# Copy passed list
self.variants = variants[:]
else:
# Use default
self.variants = ['Debug|Win32', 'Release|Win32']
# TODO(rspangler): Need to be able to handle a mapping of solution config
# to project config. Should we be able to handle variants being a dict,
# or add a separate variant_map variable? If it's a dict, we can't
# guarantee the order of variants since dict keys aren't ordered.
if variants:
# Copy passed list
self.variants = variants[:]
else:
# Use default
self.variants = ["Debug|Win32", "Release|Win32"]
# TODO(rspangler): Need to be able to handle a mapping of solution config
# to project config. Should we be able to handle variants being a dict,
# or add a separate variant_map variable? If it's a dict, we can't
# guarantee the order of variants since dict keys aren't ordered.
# TODO(rspangler): Automatically write to disk for now; should delay until
# node-evaluation time.
self.Write()
# TODO(rspangler): Automatically write to disk for now; should delay until
# node-evaluation time.
self.Write()
def Write(self, writer=gyp.common.WriteOnDiff):
"""Writes the solution file to disk.
def Write(self, writer=gyp.common.WriteOnDiff):
"""Writes the solution file to disk.
Raises:
IndexError: An entry appears multiple times.
"""
# Walk the entry tree and collect all the folders and projects.
all_entries = set()
entries_to_check = self.entries[:]
while entries_to_check:
e = entries_to_check.pop(0)
# Walk the entry tree and collect all the folders and projects.
all_entries = set()
entries_to_check = self.entries[:]
while entries_to_check:
e = entries_to_check.pop(0)
# If this entry has been visited, nothing to do.
if e in all_entries:
continue
# If this entry has been visited, nothing to do.
if e in all_entries:
continue
all_entries.add(e)
all_entries.add(e)
# If this is a folder, check its entries too.
if isinstance(e, MSVSFolder):
entries_to_check += e.entries
# If this is a folder, check its entries too.
if isinstance(e, MSVSFolder):
entries_to_check += e.entries
all_entries = sorted(all_entries, key=attrgetter('path'))
all_entries = sorted(all_entries, key=attrgetter("path"))
# Open file and print header
f = writer(self.path)
f.write('Microsoft Visual Studio Solution File, '
'Format Version %s\r\n' % self.version.SolutionVersion())
f.write('# %s\r\n' % self.version.Description())
# Open file and print header
f = writer(self.path)
f.write(
"Microsoft Visual Studio Solution File, "
"Format Version %s\r\n" % self.version.SolutionVersion()
)
f.write("# %s\r\n" % self.version.Description())
# Project entries
sln_root = os.path.split(self.path)[0]
for e in all_entries:
relative_path = gyp.common.RelativePath(e.path, sln_root)
# msbuild does not accept an empty folder_name.
# use '.' in case relative_path is empty.
folder_name = relative_path.replace('/', '\\') or '.'
f.write('Project("%s") = "%s", "%s", "%s"\r\n' % (
e.entry_type_guid, # Entry type GUID
e.name, # Folder name
folder_name, # Folder name (again)
e.get_guid(), # Entry GUID
))
# Project entries
sln_root = os.path.split(self.path)[0]
for e in all_entries:
relative_path = gyp.common.RelativePath(e.path, sln_root)
# msbuild does not accept an empty folder_name.
# use '.' in case relative_path is empty.
folder_name = relative_path.replace("/", "\\") or "."
f.write(
'Project("%s") = "%s", "%s", "%s"\r\n'
% (
e.entry_type_guid, # Entry type GUID
e.name, # Folder name
folder_name, # Folder name (again)
e.get_guid(), # Entry GUID
)
)
# TODO(rspangler): Need a way to configure this stuff
if self.websiteProperties:
f.write('\tProjectSection(WebsiteProperties) = preProject\r\n'
'\t\tDebug.AspNetCompiler.Debug = "True"\r\n'
'\t\tRelease.AspNetCompiler.Debug = "False"\r\n'
'\tEndProjectSection\r\n')
# TODO(rspangler): Need a way to configure this stuff
if self.websiteProperties:
f.write(
"\tProjectSection(WebsiteProperties) = preProject\r\n"
'\t\tDebug.AspNetCompiler.Debug = "True"\r\n'
'\t\tRelease.AspNetCompiler.Debug = "False"\r\n'
"\tEndProjectSection\r\n"
)
if isinstance(e, MSVSFolder):
if e.items:
f.write('\tProjectSection(SolutionItems) = preProject\r\n')
for i in e.items:
f.write('\t\t%s = %s\r\n' % (i, i))
f.write('\tEndProjectSection\r\n')
if isinstance(e, MSVSFolder):
if e.items:
f.write("\tProjectSection(SolutionItems) = preProject\r\n")
for i in e.items:
f.write("\t\t%s = %s\r\n" % (i, i))
f.write("\tEndProjectSection\r\n")
if isinstance(e, MSVSProject):
if e.dependencies:
f.write('\tProjectSection(ProjectDependencies) = postProject\r\n')
for d in e.dependencies:
f.write('\t\t%s = %s\r\n' % (d.get_guid(), d.get_guid()))
f.write('\tEndProjectSection\r\n')
if isinstance(e, MSVSProject):
if e.dependencies:
f.write("\tProjectSection(ProjectDependencies) = postProject\r\n")
for d in e.dependencies:
f.write("\t\t%s = %s\r\n" % (d.get_guid(), d.get_guid()))
f.write("\tEndProjectSection\r\n")
f.write('EndProject\r\n')
f.write("EndProject\r\n")
# Global section
f.write('Global\r\n')
# Global section
f.write("Global\r\n")
# Configurations (variants)
f.write('\tGlobalSection(SolutionConfigurationPlatforms) = preSolution\r\n')
for v in self.variants:
f.write('\t\t%s = %s\r\n' % (v, v))
f.write('\tEndGlobalSection\r\n')
# Configurations (variants)
f.write("\tGlobalSection(SolutionConfigurationPlatforms) = preSolution\r\n")
for v in self.variants:
f.write("\t\t%s = %s\r\n" % (v, v))
f.write("\tEndGlobalSection\r\n")
# Sort config guids for easier diffing of solution changes.
config_guids = []
config_guids_overrides = {}
for e in all_entries:
if isinstance(e, MSVSProject):
config_guids.append(e.get_guid())
config_guids_overrides[e.get_guid()] = e.config_platform_overrides
config_guids.sort()
# Sort config guids for easier diffing of solution changes.
config_guids = []
config_guids_overrides = {}
for e in all_entries:
if isinstance(e, MSVSProject):
config_guids.append(e.get_guid())
config_guids_overrides[e.get_guid()] = e.config_platform_overrides
config_guids.sort()
f.write('\tGlobalSection(ProjectConfigurationPlatforms) = postSolution\r\n')
for g in config_guids:
for v in self.variants:
nv = config_guids_overrides[g].get(v, v)
# Pick which project configuration to build for this solution
# configuration.
f.write('\t\t%s.%s.ActiveCfg = %s\r\n' % (
g, # Project GUID
v, # Solution build configuration
nv, # Project build config for that solution config
))
f.write("\tGlobalSection(ProjectConfigurationPlatforms) = postSolution\r\n")
for g in config_guids:
for v in self.variants:
nv = config_guids_overrides[g].get(v, v)
# Pick which project configuration to build for this solution
# configuration.
f.write(
"\t\t%s.%s.ActiveCfg = %s\r\n"
% (
g, # Project GUID
v, # Solution build configuration
nv, # Project build config for that solution config
)
)
# Enable project in this solution configuration.
f.write('\t\t%s.%s.Build.0 = %s\r\n' % (
g, # Project GUID
v, # Solution build configuration
nv, # Project build config for that solution config
))
f.write('\tEndGlobalSection\r\n')
# Enable project in this solution configuration.
f.write(
"\t\t%s.%s.Build.0 = %s\r\n"
% (
g, # Project GUID
v, # Solution build configuration
nv, # Project build config for that solution config
)
)
f.write("\tEndGlobalSection\r\n")
# TODO(rspangler): Should be able to configure this stuff too (though I've
# never seen this be any different)
f.write('\tGlobalSection(SolutionProperties) = preSolution\r\n')
f.write('\t\tHideSolutionNode = FALSE\r\n')
f.write('\tEndGlobalSection\r\n')
# TODO(rspangler): Should be able to configure this stuff too (though I've
# never seen this be any different)
f.write("\tGlobalSection(SolutionProperties) = preSolution\r\n")
f.write("\t\tHideSolutionNode = FALSE\r\n")
f.write("\tEndGlobalSection\r\n")
# Folder mappings
# Omit this section if there are no folders
if any([e.entries for e in all_entries if isinstance(e, MSVSFolder)]):
f.write('\tGlobalSection(NestedProjects) = preSolution\r\n')
for e in all_entries:
if not isinstance(e, MSVSFolder):
continue # Does not apply to projects, only folders
for subentry in e.entries:
f.write('\t\t%s = %s\r\n' % (subentry.get_guid(), e.get_guid()))
f.write('\tEndGlobalSection\r\n')
# Folder mappings
# Omit this section if there are no folders
if any([e.entries for e in all_entries if isinstance(e, MSVSFolder)]):
f.write("\tGlobalSection(NestedProjects) = preSolution\r\n")
for e in all_entries:
if not isinstance(e, MSVSFolder):
continue # Does not apply to projects, only folders
for subentry in e.entries:
f.write("\t\t%s = %s\r\n" % (subentry.get_guid(), e.get_guid()))
f.write("\tEndGlobalSection\r\n")
f.write('EndGlobal\r\n')
f.write("EndGlobal\r\n")
f.close()
f.close()
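
To make the GUID scheme in MakeGuid above concrete, here is a standalone restatement (the helper name is mine; only hashlib is required). The key property is determinism: the same seed and name always yield the same GUID, so regenerating a project never churns the solution file.

import hashlib

def make_guid(name, seed="msvs_new"):
    # MD5 of seed+name, sliced into the 8-4-4-4-12 GUID layout,
    # exactly as MakeGuid does above.
    d = hashlib.md5((str(seed) + str(name)).encode("utf-8")).hexdigest().upper()
    return "{%s-%s-%s-%s-%s}" % (d[:8], d[8:12], d[12:16], d[16:20], d[20:32])

print(make_guid("base"))  # identical output on every run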

gyp/pylib/gyp/MSVSProject.py

@@ -4,55 +4,55 @@
"""Visual Studio project reader/writer."""
import gyp.common
import gyp.easy_xml as easy_xml
#------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
class Tool(object):
"""Visual Studio tool."""
"""Visual Studio tool."""
def __init__(self, name, attrs=None):
"""Initializes the tool.
def __init__(self, name, attrs=None):
"""Initializes the tool.
Args:
name: Tool name.
attrs: Dict of tool attributes; may be None.
"""
self._attrs = attrs or {}
self._attrs['Name'] = name
self._attrs = attrs or {}
self._attrs["Name"] = name
def _GetSpecification(self):
"""Creates an element for the tool.
def _GetSpecification(self):
"""Creates an element for the tool.
Returns:
A new xml.dom.Element for the tool.
"""
return ['Tool', self._attrs]
return ["Tool", self._attrs]
class Filter(object):
"""Visual Studio filter - that is, a virtual folder."""
"""Visual Studio filter - that is, a virtual folder."""
def __init__(self, name, contents=None):
"""Initializes the folder.
def __init__(self, name, contents=None):
"""Initializes the folder.
Args:
name: Filter (folder) name.
contents: List of filenames and/or Filter objects contained.
"""
self.name = name
self.contents = list(contents or [])
self.name = name
self.contents = list(contents or [])
#------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
class Writer(object):
"""Visual Studio XML project writer."""
"""Visual Studio XML project writer."""
def __init__(self, project_path, version, name, guid=None, platforms=None):
"""Initializes the project.
def __init__(self, project_path, version, name, guid=None, platforms=None):
"""Initializes the project.
Args:
project_path: Path to the project file.
@@ -61,36 +61,36 @@ class Writer(object):
guid: GUID to use for project, if not None.
platforms: Array of string, the supported platforms. If null, ['Win32']
"""
self.project_path = project_path
self.version = version
self.name = name
self.guid = guid
self.project_path = project_path
self.version = version
self.name = name
self.guid = guid
# Default to Win32 for platforms.
if not platforms:
platforms = ['Win32']
# Default to Win32 for platforms.
if not platforms:
platforms = ["Win32"]
# Initialize the specifications of the various sections.
self.platform_section = ['Platforms']
for platform in platforms:
self.platform_section.append(['Platform', {'Name': platform}])
self.tool_files_section = ['ToolFiles']
self.configurations_section = ['Configurations']
self.files_section = ['Files']
# Initialize the specifications of the various sections.
self.platform_section = ["Platforms"]
for platform in platforms:
self.platform_section.append(["Platform", {"Name": platform}])
self.tool_files_section = ["ToolFiles"]
self.configurations_section = ["Configurations"]
self.files_section = ["Files"]
# Keep a dict keyed on filename to speed up access.
self.files_dict = dict()
# Keep a dict keyed on filename to speed up access.
self.files_dict = dict()
def AddToolFile(self, path):
"""Adds a tool file to the project.
def AddToolFile(self, path):
"""Adds a tool file to the project.
Args:
path: Relative path from project to tool file.
"""
self.tool_files_section.append(['ToolFile', {'RelativePath': path}])
self.tool_files_section.append(["ToolFile", {"RelativePath": path}])
def _GetSpecForConfiguration(self, config_type, config_name, attrs, tools):
"""Returns the specification for a configuration.
def _GetSpecForConfiguration(self, config_type, config_name, attrs, tools):
"""Returns the specification for a configuration.
Args:
config_type: Type of configuration node.
@@ -99,40 +99,39 @@ class Writer(object):
tools: List of tools (strings or Tool objects); may be None.
Returns:
"""
# Handle defaults
if not attrs:
attrs = {}
if not tools:
tools = []
# Handle defaults
if not attrs:
attrs = {}
if not tools:
tools = []
# Add configuration node and its attributes
node_attrs = attrs.copy()
node_attrs['Name'] = config_name
specification = [config_type, node_attrs]
# Add configuration node and its attributes
node_attrs = attrs.copy()
node_attrs["Name"] = config_name
specification = [config_type, node_attrs]
# Add tool nodes and their attributes
if tools:
for t in tools:
if isinstance(t, Tool):
specification.append(t._GetSpecification())
else:
specification.append(Tool(t)._GetSpecification())
return specification
# Add tool nodes and their attributes
if tools:
for t in tools:
if isinstance(t, Tool):
specification.append(t._GetSpecification())
else:
specification.append(Tool(t)._GetSpecification())
return specification
def AddConfig(self, name, attrs=None, tools=None):
"""Adds a configuration to the project.
def AddConfig(self, name, attrs=None, tools=None):
"""Adds a configuration to the project.
Args:
name: Configuration name.
attrs: Dict of configuration attributes; may be None.
tools: List of tools (strings or Tool objects); may be None.
"""
spec = self._GetSpecForConfiguration('Configuration', name, attrs, tools)
self.configurations_section.append(spec)
spec = self._GetSpecForConfiguration("Configuration", name, attrs, tools)
self.configurations_section.append(spec)
def _AddFilesToNode(self, parent, files):
"""Adds files and/or filters to the parent node.
def _AddFilesToNode(self, parent, files):
"""Adds files and/or filters to the parent node.
Args:
parent: Destination node
@@ -140,17 +139,17 @@ class Writer(object):
Will call itself recursively, if the files list contains Filter objects.
"""
for f in files:
if isinstance(f, Filter):
node = ['Filter', {'Name': f.name}]
self._AddFilesToNode(node, f.contents)
else:
node = ['File', {'RelativePath': f}]
self.files_dict[f] = node
parent.append(node)
for f in files:
if isinstance(f, Filter):
node = ["Filter", {"Name": f.name}]
self._AddFilesToNode(node, f.contents)
else:
node = ["File", {"RelativePath": f}]
self.files_dict[f] = node
parent.append(node)
def AddFiles(self, files):
"""Adds files to the project.
def AddFiles(self, files):
"""Adds files to the project.
Args:
files: A list of Filter objects and/or relative paths to files.
@@ -159,12 +158,12 @@ class Writer(object):
later add files to a Filter object which was passed into a previous call
to AddFiles(), it will not be reflected in this project.
"""
self._AddFilesToNode(self.files_section, files)
# TODO(rspangler) This also doesn't handle adding files to an existing
# filter. That is, it doesn't merge the trees.
self._AddFilesToNode(self.files_section, files)
# TODO(rspangler) This also doesn't handle adding files to an existing
# filter. That is, it doesn't merge the trees.
def AddFileConfig(self, path, config, attrs=None, tools=None):
"""Adds a configuration to a file.
def AddFileConfig(self, path, config, attrs=None, tools=None):
"""Adds a configuration to a file.
Args:
path: Relative path to the file.
@@ -175,34 +174,33 @@ class Writer(object):
Raises:
ValueError: Relative path does not match any file added via AddFiles().
"""
# Find the file node with the right relative path
parent = self.files_dict.get(path)
if not parent:
raise ValueError('AddFileConfig: file "%s" not in project.' % path)
# Find the file node with the right relative path
parent = self.files_dict.get(path)
if not parent:
raise ValueError('AddFileConfig: file "%s" not in project.' % path)
# Add the config to the file node
spec = self._GetSpecForConfiguration('FileConfiguration', config, attrs,
tools)
parent.append(spec)
# Add the config to the file node
spec = self._GetSpecForConfiguration("FileConfiguration", config, attrs, tools)
parent.append(spec)
def WriteIfChanged(self):
"""Writes the project file."""
# First create XML content definition
content = [
'VisualStudioProject',
{'ProjectType': 'Visual C++',
'Version': self.version.ProjectVersion(),
'Name': self.name,
'ProjectGUID': self.guid,
'RootNamespace': self.name,
'Keyword': 'Win32Proj'
},
self.platform_section,
self.tool_files_section,
self.configurations_section,
['References'], # empty section
self.files_section,
['Globals'] # empty section
]
easy_xml.WriteXmlIfChanged(content, self.project_path,
encoding="Windows-1252")
def WriteIfChanged(self):
"""Writes the project file."""
# First create XML content definition
content = [
"VisualStudioProject",
{
"ProjectType": "Visual C++",
"Version": self.version.ProjectVersion(),
"Name": self.name,
"ProjectGUID": self.guid,
"RootNamespace": self.name,
"Keyword": "Win32Proj",
},
self.platform_section,
self.tool_files_section,
self.configurations_section,
["References"], # empty section
self.files_section,
["Globals"], # empty section
]
easy_xml.WriteXmlIfChanged(content, self.project_path, encoding="Windows-1252")
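
The content value handed to easy_xml.WriteXmlIfChanged above is a nested-list XML specification: [tag, optional attribute dict, children...]. A small hypothetical example of how such a spec maps to XML:

# [tag, attrs, children...] nests directly:
tool = ["Tool", {"Name": "VCCLCompilerTool"}]
config = ["Configuration", {"Name": "Release|Win32"}, tool]
# which would serialize along the lines of:
# <Configuration Name="Release|Win32">
#     <Tool Name="VCCLCompilerTool"/>
# </Configuration>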

gyp/pylib/gyp/MSVSSettings.py
File diff suppressed because it is too large

gyp/pylib/gyp/MSVSSettings_test.py
File diff suppressed because it is too large

gyp/pylib/gyp/MSVSToolFile.py

@@ -4,28 +4,27 @@
"""Visual Studio project reader/writer."""
import gyp.common
import gyp.easy_xml as easy_xml
class Writer(object):
"""Visual Studio XML tool file writer."""
"""Visual Studio XML tool file writer."""
def __init__(self, tool_file_path, name):
"""Initializes the tool file.
def __init__(self, tool_file_path, name):
"""Initializes the tool file.
Args:
tool_file_path: Path to the tool file.
name: Name of the tool file.
"""
self.tool_file_path = tool_file_path
self.name = name
self.rules_section = ['Rules']
self.tool_file_path = tool_file_path
self.name = name
self.rules_section = ["Rules"]
def AddCustomBuildRule(self, name, cmd, description,
additional_dependencies,
outputs, extensions):
"""Adds a rule to the tool file.
def AddCustomBuildRule(
self, name, cmd, description, additional_dependencies, outputs, extensions
):
"""Adds a rule to the tool file.
Args:
name: Name of the rule.
@@ -35,24 +34,26 @@ class Writer(object):
outputs: outputs of the rule.
extensions: extensions handled by the rule.
"""
rule = ['CustomBuildRule',
{'Name': name,
'ExecutionDescription': description,
'CommandLine': cmd,
'Outputs': ';'.join(outputs),
'FileExtensions': ';'.join(extensions),
'AdditionalDependencies':
';'.join(additional_dependencies)
}]
self.rules_section.append(rule)
rule = [
"CustomBuildRule",
{
"Name": name,
"ExecutionDescription": description,
"CommandLine": cmd,
"Outputs": ";".join(outputs),
"FileExtensions": ";".join(extensions),
"AdditionalDependencies": ";".join(additional_dependencies),
},
]
self.rules_section.append(rule)
def WriteIfChanged(self):
"""Writes the tool file."""
content = ['VisualStudioToolFile',
{'Version': '8.00',
'Name': self.name
},
self.rules_section
]
easy_xml.WriteXmlIfChanged(content, self.tool_file_path,
encoding="Windows-1252")
def WriteIfChanged(self):
"""Writes the tool file."""
content = [
"VisualStudioToolFile",
{"Version": "8.00", "Name": self.name},
self.rules_section,
]
easy_xml.WriteXmlIfChanged(
content, self.tool_file_path, encoding="Windows-1252"
)

gyp/pylib/gyp/MSVSUserFile.py

@@ -6,78 +6,81 @@
import os
import re
import socket # for gethostname
import socket # for gethostname
import gyp.common
import gyp.easy_xml as easy_xml
#------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
def _FindCommandInPath(command):
"""If there are no slashes in the command given, this function
"""If there are no slashes in the command given, this function
searches the PATH env to find the given command, and converts it
to an absolute path. We have to do this because MSVS is looking
for an actual file to launch a debugger on, not just a command
line. Note that this happens at GYP time, so anything needing to
be built needs to have a full path."""
if '/' in command or '\\' in command:
# If the command already has path elements (either relative or
# absolute), then assume it is constructed properly.
if "/" in command or "\\" in command:
# If the command already has path elements (either relative or
# absolute), then assume it is constructed properly.
return command
else:
# Search through the path list and find an existing file that
# we can access.
paths = os.environ.get("PATH", "").split(os.pathsep)
for path in paths:
item = os.path.join(path, command)
if os.path.isfile(item) and os.access(item, os.X_OK):
return item
return command
else:
# Search through the path list and find an existing file that
# we can access.
paths = os.environ.get('PATH','').split(os.pathsep)
for path in paths:
item = os.path.join(path, command)
if os.path.isfile(item) and os.access(item, os.X_OK):
return item
return command
def _QuoteWin32CommandLineArgs(args):
new_args = []
for arg in args:
# Replace all double-quotes with double-double-quotes to escape
# them for cmd shell, and then quote the whole thing if there
# are any.
if arg.find('"') != -1:
arg = '""'.join(arg.split('"'))
arg = '"%s"' % arg
new_args = []
for arg in args:
# Replace all double-quotes with double-double-quotes to escape
# them for cmd shell, and then quote the whole thing if there
# are any.
if arg.find('"') != -1:
arg = '""'.join(arg.split('"'))
arg = '"%s"' % arg
# Otherwise, if there are any spaces, quote the whole arg.
elif re.search(r"[ \t\n]", arg):
arg = '"%s"' % arg
new_args.append(arg)
return new_args
# Otherwise, if there are any spaces, quote the whole arg.
elif re.search(r'[ \t\n]', arg):
arg = '"%s"' % arg
new_args.append(arg)
return new_args
class Writer(object):
"""Visual Studio XML user user file writer."""
"""Visual Studio XML user user file writer."""
def __init__(self, user_file_path, version, name):
"""Initializes the user file.
def __init__(self, user_file_path, version, name):
"""Initializes the user file.
Args:
user_file_path: Path to the user file.
version: Version info.
name: Name of the user file.
"""
self.user_file_path = user_file_path
self.version = version
self.name = name
self.configurations = {}
self.user_file_path = user_file_path
self.version = version
self.name = name
self.configurations = {}
def AddConfig(self, name):
"""Adds a configuration to the project.
def AddConfig(self, name):
"""Adds a configuration to the project.
Args:
name: Configuration name.
"""
self.configurations[name] = ['Configuration', {'Name': name}]
self.configurations[name] = ["Configuration", {"Name": name}]
def AddDebugSettings(self, config_name, command, environment = {},
working_directory=""):
"""Adds a DebugSettings node to the user file for a particular config.
def AddDebugSettings(
self, config_name, command, environment={}, working_directory=""
):
"""Adds a DebugSettings node to the user file for a particular config.
Args:
command: command line to run. First element in the list is the
@@ -85,63 +88,66 @@ class Writer(object):
necessary.
working_directory: other files which may trigger the rule. (optional)
"""
command = _QuoteWin32CommandLineArgs(command)
command = _QuoteWin32CommandLineArgs(command)
abs_command = _FindCommandInPath(command[0])
abs_command = _FindCommandInPath(command[0])
if environment and isinstance(environment, dict):
env_list = ['%s="%s"' % (key, val)
for (key,val) in environment.items()]
environment = ' '.join(env_list)
else:
environment = ''
if environment and isinstance(environment, dict):
env_list = ['%s="%s"' % (key, val) for (key, val) in environment.items()]
environment = " ".join(env_list)
else:
environment = ""
n_cmd = ['DebugSettings',
{'Command': abs_command,
'WorkingDirectory': working_directory,
'CommandArguments': " ".join(command[1:]),
'RemoteMachine': socket.gethostname(),
'Environment': environment,
'EnvironmentMerge': 'true',
# Currently these are all "dummy" values that we're just setting
# in the default manner that MSVS does it. We could use some of
# these to add additional capabilities, I suppose, but they might
# not have parity with other platforms then.
'Attach': 'false',
'DebuggerType': '3', # 'auto' debugger
'Remote': '1',
'RemoteCommand': '',
'HttpUrl': '',
'PDBPath': '',
'SQLDebugging': '',
'DebuggerFlavor': '0',
'MPIRunCommand': '',
'MPIRunArguments': '',
'MPIRunWorkingDirectory': '',
'ApplicationCommand': '',
'ApplicationArguments': '',
'ShimCommand': '',
'MPIAcceptMode': '',
'MPIAcceptFilter': ''
}]
n_cmd = [
"DebugSettings",
{
"Command": abs_command,
"WorkingDirectory": working_directory,
"CommandArguments": " ".join(command[1:]),
"RemoteMachine": socket.gethostname(),
"Environment": environment,
"EnvironmentMerge": "true",
# Currently these are all "dummy" values that we're just setting
# in the default manner that MSVS does it. We could use some of
# these to add additional capabilities, I suppose, but they might
# not have parity with other platforms then.
"Attach": "false",
"DebuggerType": "3", # 'auto' debugger
"Remote": "1",
"RemoteCommand": "",
"HttpUrl": "",
"PDBPath": "",
"SQLDebugging": "",
"DebuggerFlavor": "0",
"MPIRunCommand": "",
"MPIRunArguments": "",
"MPIRunWorkingDirectory": "",
"ApplicationCommand": "",
"ApplicationArguments": "",
"ShimCommand": "",
"MPIAcceptMode": "",
"MPIAcceptFilter": "",
},
]
# Find the config, and add it if it doesn't exist.
if config_name not in self.configurations:
self.AddConfig(config_name)
# Find the config, and add it if it doesn't exist.
if config_name not in self.configurations:
self.AddConfig(config_name)
# Add the DebugSettings onto the appropriate config.
self.configurations[config_name].append(n_cmd)
# Add the DebugSettings onto the appropriate config.
self.configurations[config_name].append(n_cmd)
def WriteIfChanged(self):
"""Writes the user file."""
configs = ['Configurations']
for config, spec in sorted(self.configurations.items()):
configs.append(spec)
def WriteIfChanged(self):
"""Writes the user file."""
configs = ["Configurations"]
for config, spec in sorted(self.configurations.items()):
configs.append(spec)
content = ['VisualStudioUserFile',
{'Version': self.version.ProjectVersion(),
'Name': self.name
},
configs]
easy_xml.WriteXmlIfChanged(content, self.user_file_path,
encoding="Windows-1252")
content = [
"VisualStudioUserFile",
{"Version": self.version.ProjectVersion(), "Name": self.name},
configs,
]
easy_xml.WriteXmlIfChanged(
content, self.user_file_path, encoding="Windows-1252"
)
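
A quick worked example of the quoting rules in _QuoteWin32CommandLineArgs above (a standalone restatement; expected output shown in the comment):

import re

def quote_win32_args(args):
    # As above: double up embedded quotes for cmd.exe, then wrap any
    # argument containing quotes or whitespace in double quotes.
    new_args = []
    for arg in args:
        if '"' in arg:
            arg = '"%s"' % '""'.join(arg.split('"'))
        elif re.search(r"[ \t\n]", arg):
            arg = '"%s"' % arg
        new_args.append(arg)
    return new_args

print(quote_win32_args(["app.exe", "hello world", 'say "hi"']))
# ['app.exe', '"hello world"', '"say ""hi"""']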

gyp/pylib/gyp/MSVSUtil.py

@@ -10,25 +10,25 @@ import os
# A dictionary mapping supported target types to extensions.
TARGET_TYPE_EXT = {
'executable': 'exe',
'loadable_module': 'dll',
'shared_library': 'dll',
'static_library': 'lib',
'windows_driver': 'sys',
"executable": "exe",
"loadable_module": "dll",
"shared_library": "dll",
"static_library": "lib",
"windows_driver": "sys",
}
def _GetLargePdbShimCcPath():
"""Returns the path of the large_pdb_shim.cc file."""
this_dir = os.path.abspath(os.path.dirname(__file__))
src_dir = os.path.abspath(os.path.join(this_dir, '..', '..'))
win_data_dir = os.path.join(src_dir, 'data', 'win')
large_pdb_shim_cc = os.path.join(win_data_dir, 'large-pdb-shim.cc')
return large_pdb_shim_cc
"""Returns the path of the large_pdb_shim.cc file."""
this_dir = os.path.abspath(os.path.dirname(__file__))
src_dir = os.path.abspath(os.path.join(this_dir, "..", ".."))
win_data_dir = os.path.join(src_dir, "data", "win")
large_pdb_shim_cc = os.path.join(win_data_dir, "large-pdb-shim.cc")
return large_pdb_shim_cc
def _DeepCopySomeKeys(in_dict, keys):
"""Performs a partial deep-copy on |in_dict|, only copying the keys in |keys|.
"""Performs a partial deep-copy on |in_dict|, only copying the keys in |keys|.
Arguments:
in_dict: The dictionary to copy.
@@ -37,16 +37,16 @@ def _DeepCopySomeKeys(in_dict, keys):
Returns:
The partially deep-copied dictionary.
"""
d = {}
for key in keys:
if key not in in_dict:
continue
d[key] = copy.deepcopy(in_dict[key])
return d
d = {}
for key in keys:
if key not in in_dict:
continue
d[key] = copy.deepcopy(in_dict[key])
return d
def _SuffixName(name, suffix):
"""Add a suffix to the end of a target.
"""Add a suffix to the end of a target.
Arguments:
name: name of the target (foo#target)
@@ -54,13 +54,13 @@ def _SuffixName(name, suffix):
Returns:
Target name with suffix added (foo_suffix#target)
"""
parts = name.rsplit('#', 1)
parts[0] = '%s_%s' % (parts[0], suffix)
return '#'.join(parts)
parts = name.rsplit("#", 1)
parts[0] = "%s_%s" % (parts[0], suffix)
return "#".join(parts)
def _ShardName(name, number):
"""Add a shard number to the end of a target.
"""Add a shard number to the end of a target.
Arguments:
name: name of the target (foo#target)
@@ -68,11 +68,11 @@ def _ShardName(name, number):
Returns:
Target name with shard added (foo_1#target)
"""
return _SuffixName(name, str(number))
return _SuffixName(name, str(number))
def ShardTargets(target_list, target_dicts):
"""Shard some targets apart to work around the linkers limits.
"""Shard some targets apart to work around the linkers limits.
Arguments:
target_list: List of target pairs: 'base/base.gyp:base'.
@@ -80,54 +80,55 @@ def ShardTargets(target_list, target_dicts):
Returns:
Tuple of the new sharded versions of the inputs.
"""
# Gather the targets to shard, and how many pieces.
targets_to_shard = {}
for t in target_dicts:
shards = int(target_dicts[t].get('msvs_shard', 0))
if shards:
targets_to_shard[t] = shards
# Shard target_list.
new_target_list = []
for t in target_list:
if t in targets_to_shard:
for i in range(targets_to_shard[t]):
new_target_list.append(_ShardName(t, i))
else:
new_target_list.append(t)
# Shard target_dict.
new_target_dicts = {}
for t in target_dicts:
if t in targets_to_shard:
for i in range(targets_to_shard[t]):
name = _ShardName(t, i)
new_target_dicts[name] = copy.copy(target_dicts[t])
new_target_dicts[name]['target_name'] = _ShardName(
new_target_dicts[name]['target_name'], i)
sources = new_target_dicts[name].get('sources', [])
new_sources = []
for pos in range(i, len(sources), targets_to_shard[t]):
new_sources.append(sources[pos])
new_target_dicts[name]['sources'] = new_sources
else:
new_target_dicts[t] = target_dicts[t]
# Shard dependencies.
for t in sorted(new_target_dicts):
for deptype in ('dependencies', 'dependencies_original'):
dependencies = copy.copy(new_target_dicts[t].get(deptype, []))
new_dependencies = []
for d in dependencies:
if d in targets_to_shard:
for i in range(targets_to_shard[d]):
new_dependencies.append(_ShardName(d, i))
# Gather the targets to shard, and how many pieces.
targets_to_shard = {}
for t in target_dicts:
shards = int(target_dicts[t].get("msvs_shard", 0))
if shards:
targets_to_shard[t] = shards
# Shard target_list.
new_target_list = []
for t in target_list:
if t in targets_to_shard:
for i in range(targets_to_shard[t]):
new_target_list.append(_ShardName(t, i))
else:
new_dependencies.append(d)
new_target_dicts[t][deptype] = new_dependencies
new_target_list.append(t)
# Shard target_dict.
new_target_dicts = {}
for t in target_dicts:
if t in targets_to_shard:
for i in range(targets_to_shard[t]):
name = _ShardName(t, i)
new_target_dicts[name] = copy.copy(target_dicts[t])
new_target_dicts[name]["target_name"] = _ShardName(
new_target_dicts[name]["target_name"], i
)
sources = new_target_dicts[name].get("sources", [])
new_sources = []
for pos in range(i, len(sources), targets_to_shard[t]):
new_sources.append(sources[pos])
new_target_dicts[name]["sources"] = new_sources
else:
new_target_dicts[t] = target_dicts[t]
# Shard dependencies.
for t in sorted(new_target_dicts):
for deptype in ("dependencies", "dependencies_original"):
dependencies = copy.copy(new_target_dicts[t].get(deptype, []))
new_dependencies = []
for d in dependencies:
if d in targets_to_shard:
for i in range(targets_to_shard[d]):
new_dependencies.append(_ShardName(d, i))
else:
new_dependencies.append(d)
new_target_dicts[t][deptype] = new_dependencies
return (new_target_list, new_target_dicts)
return (new_target_list, new_target_dicts)
def _GetPdbPath(target_dict, config_name, vars):
"""Returns the path to the PDB file that will be generated by a given
"""Returns the path to the PDB file that will be generated by a given
configuration.
The lookup proceeds as follows:
@@ -144,30 +145,29 @@ def _GetPdbPath(target_dict, config_name, vars):
Returns:
The path of the corresponding PDB file.
"""
config = target_dict['configurations'][config_name]
msvs = config.setdefault('msvs_settings', {})
config = target_dict["configurations"][config_name]
msvs = config.setdefault("msvs_settings", {})
linker = msvs.get('VCLinkerTool', {})
linker = msvs.get("VCLinkerTool", {})
pdb_path = linker.get("ProgramDatabaseFile")
if pdb_path:
return pdb_path
variables = target_dict.get("variables", {})
pdb_path = variables.get("msvs_large_pdb_path", None)
if pdb_path:
return pdb_path
pdb_base = target_dict.get("product_name", target_dict["target_name"])
pdb_base = "%s.%s.pdb" % (pdb_base, TARGET_TYPE_EXT[target_dict["type"]])
pdb_path = vars["PRODUCT_DIR"] + "/" + pdb_base
pdb_path = linker.get('ProgramDatabaseFile')
if pdb_path:
return pdb_path
variables = target_dict.get('variables', {})
pdb_path = variables.get('msvs_large_pdb_path', None)
if pdb_path:
return pdb_path
pdb_base = target_dict.get('product_name', target_dict['target_name'])
pdb_base = '%s.%s.pdb' % (pdb_base, TARGET_TYPE_EXT[target_dict['type']])
pdb_path = vars['PRODUCT_DIR'] + '/' + pdb_base
return pdb_path
def InsertLargePdbShims(target_list, target_dicts, vars):
"""Insert a shim target that forces the linker to use 4KB pagesize PDBs.
"""Insert a shim target that forces the linker to use 4KB pagesize PDBs.
This is a workaround for targets with PDBs greater than 1GB in size, the
limit for the 1KB pagesize PDBs created by the linker by default.
@@ -179,93 +179,93 @@ def InsertLargePdbShims(target_list, target_dicts, vars):
Returns:
Tuple of the shimmed version of the inputs.
"""
    # Determine which targets need shimming.
    targets_to_shim = []
    for t in target_dicts:
        target_dict = target_dicts[t]

        # We only want to shim targets that have msvs_large_pdb enabled.
        if not int(target_dict.get("msvs_large_pdb", 0)):
            continue
        # This is intended for executable, shared_library and loadable_module
        # targets where every configuration is set up to produce a PDB output.
        # If any of these conditions is not true then the shim logic will fail
        # below.
        targets_to_shim.append(t)

    large_pdb_shim_cc = _GetLargePdbShimCcPath()

    for t in targets_to_shim:
        target_dict = target_dicts[t]
        target_name = target_dict.get("target_name")

        base_dict = _DeepCopySomeKeys(
            target_dict, ["configurations", "default_configuration", "toolset"]
        )

        # This is the dict for copying the source file (part of the GYP tree)
        # to the intermediate directory of the project. This is necessary because
        # we can't always build a relative path to the shim source file (on Windows
        # GYP and the project may be on different drives), and Ninja hates absolute
        # paths (it ends up generating the .obj and .obj.d alongside the source
        # file, polluting GYPs tree).
        copy_suffix = "large_pdb_copy"
        copy_target_name = target_name + "_" + copy_suffix
        full_copy_target_name = _SuffixName(t, copy_suffix)
        shim_cc_basename = os.path.basename(large_pdb_shim_cc)
        shim_cc_dir = vars["SHARED_INTERMEDIATE_DIR"] + "/" + copy_target_name
        shim_cc_path = shim_cc_dir + "/" + shim_cc_basename
        copy_dict = copy.deepcopy(base_dict)
        copy_dict["target_name"] = copy_target_name
        copy_dict["type"] = "none"
        copy_dict["sources"] = [large_pdb_shim_cc]
        copy_dict["copies"] = [
            {"destination": shim_cc_dir, "files": [large_pdb_shim_cc]}
        ]

        # This is the dict for the PDB generating shim target. It depends on the
        # copy target.
        shim_suffix = "large_pdb_shim"
        shim_target_name = target_name + "_" + shim_suffix
        full_shim_target_name = _SuffixName(t, shim_suffix)
        shim_dict = copy.deepcopy(base_dict)
        shim_dict["target_name"] = shim_target_name
        shim_dict["type"] = "static_library"
        shim_dict["sources"] = [shim_cc_path]
        shim_dict["dependencies"] = [full_copy_target_name]

        # Set up the shim to output its PDB to the same location as the final linker
        # target.
        for config_name, config in shim_dict.get("configurations").items():
            pdb_path = _GetPdbPath(target_dict, config_name, vars)

            # A few keys that we don't want to propagate.
            for key in ["msvs_precompiled_header", "msvs_precompiled_source", "test"]:
                config.pop(key, None)

            msvs = config.setdefault("msvs_settings", {})

            # Update the compiler directives in the shim target.
            compiler = msvs.setdefault("VCCLCompilerTool", {})
            compiler["DebugInformationFormat"] = "3"
            compiler["ProgramDataBaseFileName"] = pdb_path

            # Set the explicit PDB path in the appropriate configuration of the
            # original target.
            config = target_dict["configurations"][config_name]
            msvs = config.setdefault("msvs_settings", {})
            linker = msvs.setdefault("VCLinkerTool", {})
            linker["GenerateDebugInformation"] = "true"
            linker["ProgramDatabaseFile"] = pdb_path

        # Add the new targets. They must go to the beginning of the list so that
        # the dependency generation works as expected in ninja.
        target_list.insert(0, full_copy_target_name)
        target_list.insert(0, full_shim_target_name)
        target_dicts[full_copy_target_name] = copy_dict
        target_dicts[full_shim_target_name] = shim_dict

        # Update the original target to depend on the shim target.
        target_dict.setdefault("dependencies", []).append(full_shim_target_name)

    return (target_list, target_dicts)
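For context, everything above is driven by the msvs_large_pdb flag on a target. A minimal, hypothetical .gyp entry that opts into the shim (the target and source names here are made up) might look like:

{
    'targets': [
        {
            'target_name': 'my_app',   # hypothetical
            'type': 'executable',      # must produce a PDB in every configuration
            'msvs_large_pdb': 1,       # requests the copy + shim targets above
            'sources': ['main.cc'],
        },
    ],
}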


@@ -9,139 +9,152 @@ import os
import re
import subprocess
import sys
import gyp
import glob
PY3 = bytes != str
def JoinPath(*args):
    return os.path.normpath(os.path.join(*args))
class VisualStudioVersion(object):
    """Information regarding a version of Visual Studio."""
    def __init__(
        self,
        short_name,
        description,
        solution_version,
        project_version,
        flat_sln,
        uses_vcxproj,
        path,
        sdk_based,
        default_toolset=None,
        compatible_sdks=None,
    ):
        self.short_name = short_name
        self.description = description
        self.solution_version = solution_version
        self.project_version = project_version
        self.flat_sln = flat_sln
        self.uses_vcxproj = uses_vcxproj
        self.path = path
        self.sdk_based = sdk_based
        self.default_toolset = default_toolset
        compatible_sdks = compatible_sdks or []
        compatible_sdks.sort(key=lambda v: float(v.replace("v", "")), reverse=True)
        self.compatible_sdks = compatible_sdks
    def ShortName(self):
        return self.short_name

    def Description(self):
        """Get the full description of the version."""
        return self.description

    def SolutionVersion(self):
        """Get the version number of the sln files."""
        return self.solution_version

    def ProjectVersion(self):
        """Get the version number of the vcproj or vcxproj files."""
        return self.project_version

    def FlatSolution(self):
        return self.flat_sln

    def UsesVcxproj(self):
        """Returns true if this version uses a vcxproj file."""
        return self.uses_vcxproj

    def ProjectExtension(self):
        """Returns the file extension for the project."""
        return self.uses_vcxproj and ".vcxproj" or ".vcproj"

    def Path(self):
        """Returns the path to Visual Studio installation."""
        return self.path

    def ToolPath(self, tool):
        """Returns the path to a given compiler tool. """
        return os.path.normpath(os.path.join(self.path, "VC/bin", tool))
    def DefaultToolset(self):
        """Returns the msbuild toolset version that will be used in the absence
        of a user override."""
        return self.default_toolset
    def _SetupScriptInternal(self, target_arch):
        """Returns a command (with arguments) to be used to set up the
        environment."""
        assert target_arch in ("x86", "x64"), "target_arch not supported"
        # If WindowsSDKDir is set and SetEnv.Cmd exists then we are using the
        # depot_tools build tools and should run SetEnv.Cmd to set up the
        # environment. The check for WindowsSDKDir alone is not sufficient because
        # this is set by running vcvarsall.bat.
        sdk_dir = os.environ.get("WindowsSDKDir", "")
        setup_path = JoinPath(sdk_dir, "Bin", "SetEnv.Cmd")
        if self.sdk_based and sdk_dir and os.path.exists(setup_path):
            return [setup_path, "/" + target_arch]

        is_host_arch_x64 = (
            os.environ.get("PROCESSOR_ARCHITECTURE") == "AMD64"
            or os.environ.get("PROCESSOR_ARCHITEW6432") == "AMD64"
        )

        # For VS2017 (and newer) it's fairly easy
        if self.short_name >= "2017":
            script_path = JoinPath(
                self.path, "VC", "Auxiliary", "Build", "vcvarsall.bat"
            )

            # Always use a native executable, cross-compiling if necessary.
            host_arch = "amd64" if is_host_arch_x64 else "x86"
            msvc_target_arch = "amd64" if target_arch == "x64" else "x86"
            arg = host_arch
            if host_arch != msvc_target_arch:
                arg += "_" + msvc_target_arch

            return [script_path, arg]

        # We try to find the best version of the env setup batch.
        vcvarsall = JoinPath(self.path, "VC", "vcvarsall.bat")
        if target_arch == "x86":
            if (
                self.short_name >= "2013"
                and self.short_name[-1] != "e"
                and is_host_arch_x64
            ):
                # VS2013 and later, non-Express have a x64-x86 cross that we want
                # to prefer.
                return [vcvarsall, "amd64_x86"]
            else:
                # Otherwise, the standard x86 compiler. We don't use VC/vcvarsall.bat
                # for x86 because vcvarsall calls vcvars32, which it can only find if
                # VS??COMNTOOLS is set, which isn't guaranteed.
                return [JoinPath(self.path, "Common7", "Tools", "vsvars32.bat")]
        elif target_arch == "x64":
            arg = "x86_amd64"
            # Use the 64-on-64 compiler if we're not using an express edition and
            # we're running on a 64bit OS.
            if self.short_name[-1] != "e" and is_host_arch_x64:
                arg = "amd64"
            return [vcvarsall, arg]

    def SetupScript(self, target_arch):
        script_data = self._SetupScriptInternal(target_arch)
        script_path = script_data[0]
        if not os.path.exists(script_path):
            raise Exception(
                "%s is missing - make sure VC++ tools are installed." % script_path
            )
        return script_data
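As a rough sketch of how a caller might consume SetupScript (not part of this file; the "&& set" trick and the variable names are illustrative assumptions):

import subprocess

vs = SelectVisualStudioVersion()  # defined later in this module
cmd = vs.SetupScript("x64")       # e.g. [r"...\vcvarsall.bat", "amd64"]
# Run the batch file, then dump the environment variables it set.
popen = subprocess.Popen(
    " ".join(cmd) + " && set", shell=True, stdout=subprocess.PIPE
)
env_dump = popen.communicate()[0]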
def _RegistryQueryBase(sysdir, key, value):
"""Use reg.exe to read a particular key.
"""Use reg.exe to read a particular key.
While ideally we might use the win32 module, we would like gyp to be
python neutral, so for instance cygwin python lacks this module.
@@ -153,28 +166,27 @@ def _RegistryQueryBase(sysdir, key, value):
Return:
stdout from reg.exe, or None for failure.
"""
    # Skip if not on Windows or Python Win32 setup issue
    if sys.platform not in ("win32", "cygwin"):
        return None

    # Setup params to pass to and attempt to launch reg.exe
    cmd = [os.path.join(os.environ.get("WINDIR", ""), sysdir, "reg.exe"), "query", key]
    if value:
        cmd.extend(["/v", value])
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    # Obtain the stdout from reg.exe, reading to the end so p.returncode is valid
    # Note that the error text may be in [1] in some cases
    text = p.communicate()[0]
    if PY3:
        text = text.decode("utf-8")

    # Check return code from reg.exe; officially 0==success and 1==error
    if p.returncode:
        return None

    return text
def _RegistryQuery(key, value=None):
r"""Use reg.exe to read a particular key through _RegistryQueryBase.
r"""Use reg.exe to read a particular key through _RegistryQueryBase.
First tries to launch from %WinDir%\Sysnative to avoid WoW64 redirection. If
that fails, it falls back to System32. Sysnative is available on Vista and
@@ -190,19 +202,19 @@ def _RegistryQuery(key, value=None):
Return:
stdout from reg.exe, or None for failure.
"""
    text = None
    try:
        text = _RegistryQueryBase("Sysnative", key, value)
    except OSError as e:
        if e.errno == errno.ENOENT:
            text = _RegistryQueryBase("System32", key, value)
        else:
            raise
    return text
def _RegistryGetValueUsingWinReg(key, value):
"""Use the _winreg module to obtain the value of a registry key.
"""Use the _winreg module to obtain the value of a registry key.
Args:
key: The registry key.
@@ -211,24 +223,24 @@ def _RegistryGetValueUsingWinReg(key, value):
contents of the registry key's value, or None on failure. Throws
ImportError if _winreg is unavailable.
"""
    try:
        # Python 2
        from _winreg import HKEY_LOCAL_MACHINE, OpenKey, QueryValueEx
    except ImportError:
        # Python 3
        from winreg import HKEY_LOCAL_MACHINE, OpenKey, QueryValueEx

    try:
        root, subkey = key.split("\\", 1)
        assert root == "HKLM"  # Only need HKLM for now.
        with OpenKey(HKEY_LOCAL_MACHINE, subkey) as hkey:
            return QueryValueEx(hkey, value)[0]
    except WindowsError:
        return None
def _RegistryGetValue(key, value):
"""Use _winreg or reg.exe to obtain the value of a registry key.
"""Use _winreg or reg.exe to obtain the value of a registry key.
Using _winreg is preferable because it solves an issue on some corporate
environments where access to reg.exe is locked down. However, we still need
@@ -241,161 +253,187 @@ def _RegistryGetValue(key, value):
Return:
contents of the registry key's value, or None on failure.
"""
    try:
        return _RegistryGetValueUsingWinReg(key, value)
    except ImportError:
        pass

    # Fallback to reg.exe if we fail to import _winreg.
    text = _RegistryQuery(key, value)
    if not text:
        return None
    # Extract value.
    match = re.search(r"REG_\w+\s+([^\r]+)\r\n", text)
    if not match:
        return None
    return match.group(1)
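For example, the detection code further down resolves install directories with lookups along these lines (illustrative; the call returns None when the key or value is absent):

install_dir = _RegistryGetValue(
    r"HKLM\Software\Microsoft\VisualStudio\SxS\VS7", "15.0"
)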
def _CreateVersion(name, path, sdk_based=False):
"""Sets up MSVS project generation.
"""Sets up MSVS project generation.
Setup is based off the GYP_MSVS_VERSION environment variable or whatever is
autodetected if GYP_MSVS_VERSION is not explicitly specified. If a version is
    passed in that doesn't match a value in versions, python will throw an error.
"""
    if path:
        path = os.path.normpath(path)
    versions = {
        "2019": VisualStudioVersion(
            "2019",
            "Visual Studio 2019",
            solution_version="12.00",
            project_version="16.0",
            flat_sln=False,
            uses_vcxproj=True,
            path=path,
            sdk_based=sdk_based,
            default_toolset="v142",
            compatible_sdks=["v8.1", "v10.0"],
        ),
        "2017": VisualStudioVersion(
            "2017",
            "Visual Studio 2017",
            solution_version="12.00",
            project_version="15.0",
            flat_sln=False,
            uses_vcxproj=True,
            path=path,
            sdk_based=sdk_based,
            default_toolset="v141",
            compatible_sdks=["v8.1", "v10.0"],
        ),
        "2015": VisualStudioVersion(
            "2015",
            "Visual Studio 2015",
            solution_version="12.00",
            project_version="14.0",
            flat_sln=False,
            uses_vcxproj=True,
            path=path,
            sdk_based=sdk_based,
            default_toolset="v140",
        ),
        "2013": VisualStudioVersion(
            "2013",
            "Visual Studio 2013",
            solution_version="13.00",
            project_version="12.0",
            flat_sln=False,
            uses_vcxproj=True,
            path=path,
            sdk_based=sdk_based,
            default_toolset="v120",
        ),
        "2013e": VisualStudioVersion(
            "2013e",
            "Visual Studio 2013",
            solution_version="13.00",
            project_version="12.0",
            flat_sln=True,
            uses_vcxproj=True,
            path=path,
            sdk_based=sdk_based,
            default_toolset="v120",
        ),
        "2012": VisualStudioVersion(
            "2012",
            "Visual Studio 2012",
            solution_version="12.00",
            project_version="4.0",
            flat_sln=False,
            uses_vcxproj=True,
            path=path,
            sdk_based=sdk_based,
            default_toolset="v110",
        ),
        "2012e": VisualStudioVersion(
            "2012e",
            "Visual Studio 2012",
            solution_version="12.00",
            project_version="4.0",
            flat_sln=True,
            uses_vcxproj=True,
            path=path,
            sdk_based=sdk_based,
            default_toolset="v110",
        ),
        "2010": VisualStudioVersion(
            "2010",
            "Visual Studio 2010",
            solution_version="11.00",
            project_version="4.0",
            flat_sln=False,
            uses_vcxproj=True,
            path=path,
            sdk_based=sdk_based,
        ),
        "2010e": VisualStudioVersion(
            "2010e",
            "Visual C++ Express 2010",
            solution_version="11.00",
            project_version="4.0",
            flat_sln=True,
            uses_vcxproj=True,
            path=path,
            sdk_based=sdk_based,
        ),
        "2008": VisualStudioVersion(
            "2008",
            "Visual Studio 2008",
            solution_version="10.00",
            project_version="9.00",
            flat_sln=False,
            uses_vcxproj=False,
            path=path,
            sdk_based=sdk_based,
        ),
        "2008e": VisualStudioVersion(
            "2008e",
            "Visual Studio 2008",
            solution_version="10.00",
            project_version="9.00",
            flat_sln=True,
            uses_vcxproj=False,
            path=path,
            sdk_based=sdk_based,
        ),
        "2005": VisualStudioVersion(
            "2005",
            "Visual Studio 2005",
            solution_version="9.00",
            project_version="8.00",
            flat_sln=False,
            uses_vcxproj=False,
            path=path,
            sdk_based=sdk_based,
        ),
        "2005e": VisualStudioVersion(
            "2005e",
            "Visual Studio 2005",
            solution_version="9.00",
            project_version="8.00",
            flat_sln=True,
            uses_vcxproj=False,
            path=path,
            sdk_based=sdk_based,
        ),
    }
    return versions[str(name)]
def _ConvertToCygpath(path):
"""Convert to cygwin path if we are using cygwin."""
if sys.platform == 'cygwin':
p = subprocess.Popen(['cygpath', path], stdout=subprocess.PIPE)
path = p.communicate()[0].strip()
if PY3:
path = path.decode('utf-8')
return path
"""Convert to cygwin path if we are using cygwin."""
if sys.platform == "cygwin":
p = subprocess.Popen(["cygpath", path], stdout=subprocess.PIPE)
path = p.communicate()[0].strip()
if PY3:
path = path.decode("utf-8")
return path
def _DetectVisualStudioVersions(versions_to_check, force_express):
"""Collect the list of installed visual studio versions.
"""Collect the list of installed visual studio versions.
Returns:
A list of visual studio versions installed in descending order of
@@ -412,105 +450,122 @@ def _DetectVisualStudioVersions(versions_to_check, force_express):
2019 - Visual Studio 2019 (16)
Where (e) is e for express editions of MSVS and blank otherwise.
"""
    version_to_year = {
        "8.0": "2005",
        "9.0": "2008",
        "10.0": "2010",
        "11.0": "2012",
        "12.0": "2013",
        "14.0": "2015",
        "15.0": "2017",
        "16.0": "2019",
    }
    versions = []
    for version in versions_to_check:
        # Old method of searching for which VS version is installed
        # We don't use the 2010-encouraged-way because we also want to get the
        # path to the binaries, which it doesn't offer.
        keys = [
            r"HKLM\Software\Microsoft\VisualStudio\%s" % version,
            r"HKLM\Software\Wow6432Node\Microsoft\VisualStudio\%s" % version,
            r"HKLM\Software\Microsoft\VCExpress\%s" % version,
            r"HKLM\Software\Wow6432Node\Microsoft\VCExpress\%s" % version,
        ]
        for index in range(len(keys)):
            path = _RegistryGetValue(keys[index], "InstallDir")
            if not path:
                continue
            path = _ConvertToCygpath(path)
            # Check for full.
            full_path = os.path.join(path, "devenv.exe")
            express_path = os.path.join(path, "*express.exe")
            if not force_express and os.path.exists(full_path):
                # Add this one.
                versions.append(
                    _CreateVersion(
                        version_to_year[version], os.path.join(path, "..", "..")
                    )
                )
            # Check for express.
            elif glob.glob(express_path):
                # Add this one.
                versions.append(
                    _CreateVersion(
                        version_to_year[version] + "e", os.path.join(path, "..", "..")
                    )
                )

        # The old method above does not work when only SDK is installed.
        keys = [
            r"HKLM\Software\Microsoft\VisualStudio\SxS\VC7",
            r"HKLM\Software\Wow6432Node\Microsoft\VisualStudio\SxS\VC7",
            r"HKLM\Software\Microsoft\VisualStudio\SxS\VS7",
            r"HKLM\Software\Wow6432Node\Microsoft\VisualStudio\SxS\VS7",
        ]
        for index in range(len(keys)):
            path = _RegistryGetValue(keys[index], version)
            if not path:
                continue
            path = _ConvertToCygpath(path)
            if version == "15.0":
                if os.path.exists(path):
                    versions.append(_CreateVersion("2017", path))
            elif version != "14.0":  # There is no Express edition for 2015.
                versions.append(
                    _CreateVersion(
                        version_to_year[version] + "e",
                        os.path.join(path, ".."),
                        sdk_based=True,
                    )
                )

    return versions
def SelectVisualStudioVersion(version="auto", allow_fallback=True):
    """Select which version of Visual Studio projects to generate.
Arguments:
version: Hook to allow caller to force a particular version (vs auto).
Returns:
An object representing a visual studio project format version.
"""
    # In auto mode, check environment variable for override.
    if version == "auto":
        version = os.environ.get("GYP_MSVS_VERSION", "auto")
    version_map = {
        "auto": ("16.0", "15.0", "14.0", "12.0", "10.0", "9.0", "8.0", "11.0"),
        "2005": ("8.0",),
        "2005e": ("8.0",),
        "2008": ("9.0",),
        "2008e": ("9.0",),
        "2010": ("10.0",),
        "2010e": ("10.0",),
        "2012": ("11.0",),
        "2012e": ("11.0",),
        "2013": ("12.0",),
        "2013e": ("12.0",),
        "2015": ("14.0",),
        "2017": ("15.0",),
        "2019": ("16.0",),
    }
    override_path = os.environ.get("GYP_MSVS_OVERRIDE_PATH")
    if override_path:
        msvs_version = os.environ.get("GYP_MSVS_VERSION")
        if not msvs_version:
            raise ValueError(
                "GYP_MSVS_OVERRIDE_PATH requires GYP_MSVS_VERSION to be "
                "set to a particular version (e.g. 2010e)."
            )
        return _CreateVersion(msvs_version, override_path, sdk_based=True)
    version = str(version)
    versions = _DetectVisualStudioVersions(version_map[version], "e" in version)
    if not versions:
        if not allow_fallback:
            raise ValueError("Could not locate Visual Studio installation.")
        if version == "auto":
            # Default to 2005 if we couldn't find anything
            return _CreateVersion("2005", None)
        else:
            return _CreateVersion(version, None)
    return versions[0]
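A small, hypothetical driver showing the environment override path (the values here are illustrative):

import os

os.environ["GYP_MSVS_VERSION"] = "2017"  # optional; "auto" autodetects
vs = SelectVisualStudioVersion()
print(vs.Description(), vs.ProjectExtension())  # e.g. Visual Studio 2017 .vcxproj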



@@ -11,9 +11,9 @@ import sys
import subprocess
try:
    from collections.abc import MutableSet
except ImportError:
    from collections import MutableSet
PY3 = bytes != str
@@ -21,191 +21,197 @@ PY3 = bytes != str
# A minimal memoizing decorator. It'll blow up if the args aren't immutable,
# among other "problems".
class memoize(object):
    def __init__(self, func):
        self.func = func
        self.cache = {}

    def __call__(self, *args):
        try:
            return self.cache[args]
        except KeyError:
            result = self.func(*args)
            self.cache[args] = result
            return result
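RelativePath below is wrapped with this decorator; any pure function whose arguments are hashable can be cached the same way. An illustrative sketch:

@memoize
def _Square(x):  # hypothetical pure function; args must be hashable
    return x * x

_Square(4)  # computed
_Square(4)  # served from the cache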
class GypError(Exception):
"""Error class representing an error, which is to be presented
"""Error class representing an error, which is to be presented
to the user. The main entry point will catch and display this.
"""
pass
pass
def ExceptionAppend(e, msg):
"""Append a message to the given exception's message."""
if not e.args:
e.args = (msg,)
elif len(e.args) == 1:
e.args = (str(e.args[0]) + ' ' + msg,)
else:
e.args = (str(e.args[0]) + ' ' + msg,) + e.args[1:]
"""Append a message to the given exception's message."""
if not e.args:
e.args = (msg,)
elif len(e.args) == 1:
e.args = (str(e.args[0]) + " " + msg,)
else:
e.args = (str(e.args[0]) + " " + msg,) + e.args[1:]
def FindQualifiedTargets(target, qualified_list):
"""
"""
Given a list of qualified targets, return the qualified targets for the
specified |target|.
"""
return [t for t in qualified_list if ParseQualifiedTarget(t)[1] == target]
return [t for t in qualified_list if ParseQualifiedTarget(t)[1] == target]
def ParseQualifiedTarget(target):
    # Splits a qualified target into a build file, target name and toolset.

    # NOTE: rsplit is used to disambiguate the Windows drive letter separator.
    target_split = target.rsplit(":", 1)
    if len(target_split) == 2:
        [build_file, target] = target_split
    else:
        build_file = None

    target_split = target.rsplit("#", 1)
    if len(target_split) == 2:
        [target, toolset] = target_split
    else:
        toolset = None

    return [build_file, target, toolset]
def ResolveTarget(build_file, target, toolset):
    # This function resolves a target into a canonical form:
    # - a fully defined build file, either absolute or relative to the current
    #   directory
    # - a target name
    # - a toolset
    #
    # build_file is the file relative to which 'target' is defined.
    # target is the qualified target.
    # toolset is the default toolset for that target.
    [parsed_build_file, target, parsed_toolset] = ParseQualifiedTarget(target)

    if parsed_build_file:
        if build_file:
            # If a relative path, parsed_build_file is relative to the directory
            # containing build_file. If build_file is not in the current directory,
            # parsed_build_file is not a usable path as-is. Resolve it by
            # interpreting it as relative to build_file. If parsed_build_file is
            # absolute, it is usable as a path regardless of the current directory,
            # and os.path.join will return it as-is.
            build_file = os.path.normpath(
                os.path.join(os.path.dirname(build_file), parsed_build_file)
            )
            # Further (to handle cases like ../cwd), make it relative to cwd)
            if not os.path.isabs(build_file):
                build_file = RelativePath(build_file, ".")
        else:
            build_file = parsed_build_file

    if parsed_toolset:
        toolset = parsed_toolset

    return [build_file, target, toolset]
def BuildFile(fully_qualified_target):
    # Extracts the build file from the fully qualified target.
    return ParseQualifiedTarget(fully_qualified_target)[0]
def GetEnvironFallback(var_list, default):
"""Look up a key in the environment, with fallback to secondary keys
"""Look up a key in the environment, with fallback to secondary keys
and finally falling back to a default value."""
for var in var_list:
if var in os.environ:
return os.environ[var]
return default
for var in var_list:
if var in os.environ:
return os.environ[var]
return default
def QualifiedTarget(build_file, target, toolset):
# "Qualified" means the file that a target was defined in and the target
# name, separated by a colon, suffixed by a # and the toolset name:
# /path/to/file.gyp:target_name#toolset
fully_qualified = build_file + ':' + target
if toolset:
fully_qualified = fully_qualified + '#' + toolset
return fully_qualified
# "Qualified" means the file that a target was defined in and the target
# name, separated by a colon, suffixed by a # and the toolset name:
# /path/to/file.gyp:target_name#toolset
fully_qualified = build_file + ":" + target
if toolset:
fully_qualified = fully_qualified + "#" + toolset
return fully_qualified
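ParseQualifiedTarget and QualifiedTarget are inverses of each other; an illustrative round trip (the path is made up):

build_file, target, toolset = ParseQualifiedTarget("/src/foo.gyp:bar#host")
# build_file == "/src/foo.gyp", target == "bar", toolset == "host"
QualifiedTarget(build_file, target, toolset)  # -> "/src/foo.gyp:bar#host"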
@memoize
def RelativePath(path, relative_to, follow_path_symlink=True):
    # Assuming both |path| and |relative_to| are relative to the current
    # directory, returns a relative path that identifies path relative to
    # relative_to.
    # If |follow_symlink_path| is true (default) and |path| is a symlink, then
    # this method returns a path to the real file represented by |path|. If it is
    # false, this method returns a path to the symlink. If |path| is not a
    # symlink, this option has no effect.

    # Convert to normalized (and therefore absolute paths).
    if follow_path_symlink:
        path = os.path.realpath(path)
    else:
        path = os.path.abspath(path)
    relative_to = os.path.realpath(relative_to)

    # On Windows, we can't create a relative path to a different drive, so just
    # use the absolute path.
    if sys.platform == "win32":
        if (
            os.path.splitdrive(path)[0].lower()
            != os.path.splitdrive(relative_to)[0].lower()
        ):
            return path

    # Split the paths into components.
    path_split = path.split(os.path.sep)
    relative_to_split = relative_to.split(os.path.sep)

    # Determine how much of the prefix the two paths share.
    prefix_len = len(os.path.commonprefix([path_split, relative_to_split]))

    # Put enough ".." components to back up out of relative_to to the common
    # prefix, and then append the part of path_split after the common prefix.
    relative_split = [os.path.pardir] * (
        len(relative_to_split) - prefix_len
    ) + path_split[prefix_len:]

    if len(relative_split) == 0:
        # The paths were the same.
        return ""

    # Turn it back into a string and we're done.
    return os.path.join(*relative_split)
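Illustrative behaviour on a POSIX system (both arguments interpreted relative to the current directory):

RelativePath("foo/bar/baz.c", "foo")    # -> "bar/baz.c"
RelativePath("foo/bar", "foo/bar/baz")  # -> ".."
RelativePath("foo", "foo")              # -> ""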
@memoize
def InvertRelativePath(path, toplevel_dir=None):
"""Given a path like foo/bar that is relative to toplevel_dir, return
"""Given a path like foo/bar that is relative to toplevel_dir, return
the inverse relative path back to the toplevel_dir.
E.g. os.path.normpath(os.path.join(path, InvertRelativePath(path)))
should always produce the empty string, unless the path contains symlinks.
"""
if not path:
return path
toplevel_dir = '.' if toplevel_dir is None else toplevel_dir
return RelativePath(toplevel_dir, os.path.join(toplevel_dir, path))
if not path:
return path
toplevel_dir = "." if toplevel_dir is None else toplevel_dir
return RelativePath(toplevel_dir, os.path.join(toplevel_dir, path))
def FixIfRelativePath(path, relative_to):
    # Like RelativePath but returns |path| unchanged if it is absolute.
    if os.path.isabs(path):
        return path
    return RelativePath(path, relative_to)
def UnrelativePath(path, relative_to):
    # Assuming that |relative_to| is relative to the current directory, and |path|
    # is a path relative to the dirname of |relative_to|, returns a path that
    # identifies |path| relative to the current directory.
    rel_dir = os.path.dirname(relative_to)
    return os.path.normpath(os.path.join(rel_dir, path))
# re objects used by EncodePOSIXShellArgument. See IEEE 1003.1 XCU.2.2 at
@@ -234,7 +240,7 @@ def UnrelativePath(path, relative_to):
# This does not match the characters in _escape, because those need to be
# backslash-escaped regardless of whether they appear in a double-quoted
# string.
_quote = re.compile("[\t\n #$%&'()*;<=>?[{|}~]|^$")
# _escape is a pattern that should match any character that needs to be
# escaped with a backslash, whether or not the argument matched the _quote
@@ -262,8 +268,9 @@ _quote = re.compile('[\t\n #$%&\'()*;<=>?[{|}~]|^$')
# shells, there is no room for error here by ignoring !.
_escape = re.compile(r'(["\\`])')
def EncodePOSIXShellArgument(argument):
"""Encodes |argument| suitably for consumption by POSIX shells.
"""Encodes |argument| suitably for consumption by POSIX shells.
argument may be quoted and escaped as necessary to ensure that POSIX shells
treat the returned value as a literal representing the argument passed to
@@ -272,67 +279,67 @@ def EncodePOSIXShellArgument(argument):
references to variables to be expanded by the shell.
"""
    if not isinstance(argument, str):
        argument = str(argument)

    if _quote.search(argument):
        quote = '"'
    else:
        quote = ""

    encoded = quote + re.sub(_escape, r"\\\1", argument) + quote

    return encoded
def EncodePOSIXShellList(list):
"""Encodes |list| suitably for consumption by POSIX shells.
"""Encodes |list| suitably for consumption by POSIX shells.
Returns EncodePOSIXShellArgument for each item in list, and joins them
together using the space character as an argument separator.
"""
encoded_arguments = []
for argument in list:
encoded_arguments.append(EncodePOSIXShellArgument(argument))
return ' '.join(encoded_arguments)
encoded_arguments = []
for argument in list:
encoded_arguments.append(EncodePOSIXShellArgument(argument))
return " ".join(encoded_arguments)
def DeepDependencyTargets(target_dicts, roots):
"""Returns the recursive list of target dependencies."""
dependencies = set()
pending = set(roots)
while pending:
# Pluck out one.
r = pending.pop()
# Skip if visited already.
if r in dependencies:
continue
# Add it.
dependencies.add(r)
# Add its children.
spec = target_dicts[r]
pending.update(set(spec.get('dependencies', [])))
pending.update(set(spec.get('dependencies_original', [])))
return list(dependencies - set(roots))
"""Returns the recursive list of target dependencies."""
dependencies = set()
pending = set(roots)
while pending:
# Pluck out one.
r = pending.pop()
# Skip if visited already.
if r in dependencies:
continue
# Add it.
dependencies.add(r)
# Add its children.
spec = target_dicts[r]
pending.update(set(spec.get("dependencies", [])))
pending.update(set(spec.get("dependencies_original", [])))
return list(dependencies - set(roots))
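A toy illustration of the traversal (target names are made up; the result order is unspecified because a set is used internally):

specs = {
    "app": {"dependencies": ["lib"]},
    "lib": {"dependencies": ["base"]},
    "base": {},
}
DeepDependencyTargets(specs, ["app"])  # -> ["lib", "base"] in some order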
def BuildFileTargets(target_list, build_file):
"""From a target_list, returns the subset from the specified build_file.
"""From a target_list, returns the subset from the specified build_file.
"""
return [p for p in target_list if BuildFile(p) == build_file]
return [p for p in target_list if BuildFile(p) == build_file]
def AllTargets(target_list, target_dicts, build_file):
"""Returns all targets (direct and dependencies) for the specified build_file.
"""Returns all targets (direct and dependencies) for the specified build_file.
"""
bftargets = BuildFileTargets(target_list, build_file)
deptargets = DeepDependencyTargets(target_dicts, bftargets)
return bftargets + deptargets
bftargets = BuildFileTargets(target_list, build_file)
deptargets = DeepDependencyTargets(target_dicts, bftargets)
return bftargets + deptargets
def WriteOnDiff(filename):
"""Write to a file only if the new contents differ.
"""Write to a file only if the new contents differ.
Arguments:
filename: name of the file to potentially write to.
@@ -341,148 +348,146 @@ def WriteOnDiff(filename):
the target if it differs (on close).
"""
    class Writer(object):
        """Wrapper around file which only covers the target if it differs."""

        def __init__(self):
            # On Cygwin remove the "dir" argument because `C:` prefixed paths are
            # treated as relative, consequently ending up with the current dir
            # "/cygdrive/c/..." being prefixed to those, which was obviously a
            # non-existent path, for example:
            # "/cygdrive/c/<some folder>/C:\<my win style abs path>".
            # See https://docs.python.org/2/library/tempfile.html#tempfile.mkstemp
            # for more details
            base_temp_dir = "" if IsCygwin() else os.path.dirname(filename)
            # Pick temporary file.
            tmp_fd, self.tmp_path = tempfile.mkstemp(
                suffix=".tmp",
                prefix=os.path.split(filename)[1] + ".gyp.",
                dir=base_temp_dir,
            )
            try:
                # Binary mode, since write() below encodes to UTF-8 bytes.
                self.tmp_file = os.fdopen(tmp_fd, "wb")
            except Exception:
                # Don't leave turds behind.
                os.unlink(self.tmp_path)
                raise

        def __getattr__(self, attrname):
            # Delegate everything else to self.tmp_file
            return getattr(self.tmp_file, attrname)

        def close(self):
            try:
                # Close tmp file.
                self.tmp_file.close()
                # Determine if different.
                same = False
                try:
                    same = filecmp.cmp(self.tmp_path, filename, False)
                except OSError as e:
                    if e.errno != errno.ENOENT:
                        raise

                if same:
                    # The new file is identical to the old one, just get rid of the new
                    # one.
                    os.unlink(self.tmp_path)
                else:
                    # The new file is different from the old one, or there is no old one.
                    # Rename the new file to the permanent name.
                    #
                    # tempfile.mkstemp uses an overly restrictive mode, resulting in a
                    # file that can only be read by the owner, regardless of the umask.
                    # There's no reason to not respect the umask here, which means that
                    # an extra hoop is required to fetch it and reset the new file's mode.
                    #
                    # No way to get the umask without setting a new one? Set a safe one
                    # and then set it back to the old value.
                    umask = os.umask(0o77)
                    os.umask(umask)
                    os.chmod(self.tmp_path, 0o666 & ~umask)
                    if sys.platform == "win32" and os.path.exists(filename):
                        # NOTE: on windows (but not cygwin) rename will not replace an
                        # existing file, so it must be preceded with a remove. Sadly there
                        # is no way to make the switch atomic.
                        os.remove(filename)
                    os.rename(self.tmp_path, filename)
            except Exception:
                # Don't leave turds behind.
                os.unlink(self.tmp_path)
                raise

        def write(self, s):
            self.tmp_file.write(s.encode("utf-8"))

    return Writer()
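Typical generator usage is write-then-close; a minimal sketch (the file name is illustrative):

out = WriteOnDiff("Makefile")  # hypothetical output path
out.write("# generated by gyp\n")
out.close()  # replaces Makefile only if the contents actually changed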
def EnsureDirExists(path):
"""Make sure the directory for |path| exists."""
try:
os.makedirs(os.path.dirname(path))
except OSError:
pass
"""Make sure the directory for |path| exists."""
try:
os.makedirs(os.path.dirname(path))
except OSError:
pass
def GetFlavor(params):
"""Returns |params.flavor| if it's set, the system's default flavor else."""
flavors = {
'cygwin': 'win',
'win32': 'win',
'darwin': 'mac',
}
"""Returns |params.flavor| if it's set, the system's default flavor else."""
flavors = {
"cygwin": "win",
"win32": "win",
"darwin": "mac",
}
if 'flavor' in params:
return params['flavor']
if sys.platform in flavors:
return flavors[sys.platform]
if sys.platform.startswith('sunos'):
return 'solaris'
if sys.platform.startswith(('dragonfly', 'freebsd')):
return 'freebsd'
if sys.platform.startswith('openbsd'):
return 'openbsd'
if sys.platform.startswith('netbsd'):
return 'netbsd'
if sys.platform.startswith('aix'):
return 'aix'
if sys.platform.startswith(('os390', 'zos')):
return 'zos'
if "flavor" in params:
return params["flavor"]
if sys.platform in flavors:
return flavors[sys.platform]
if sys.platform.startswith("sunos"):
return "solaris"
if sys.platform.startswith(("dragonfly", "freebsd")):
return "freebsd"
if sys.platform.startswith("openbsd"):
return "openbsd"
if sys.platform.startswith("netbsd"):
return "netbsd"
if sys.platform.startswith("aix"):
return "aix"
if sys.platform.startswith(("os390", "zos")):
return "zos"
return 'linux'
return "linux"
def CopyTool(flavor, out_path, generator_flags={}):
"""Finds (flock|mac|win)_tool.gyp in the gyp directory and copies it
"""Finds (flock|mac|win)_tool.gyp in the gyp directory and copies it
to |out_path|."""
# aix and solaris just need flock emulation. mac and win use more complicated
# support scripts.
prefix = {
'aix': 'flock',
'solaris': 'flock',
'mac': 'mac',
'win': 'win'
}.get(flavor, None)
if not prefix:
return
# aix and solaris just need flock emulation. mac and win use more complicated
# support scripts.
prefix = {"aix": "flock", "solaris": "flock", "mac": "mac", "win": "win"}.get(
flavor, None
)
if not prefix:
return
# Slurp input file.
source_path = os.path.join(
os.path.dirname(os.path.abspath(__file__)), '%s_tool.py' % prefix)
with open(source_path) as source_file:
source = source_file.readlines()
# Slurp input file.
source_path = os.path.join(
os.path.dirname(os.path.abspath(__file__)), "%s_tool.py" % prefix
)
with open(source_path) as source_file:
source = source_file.readlines()
# Set custom header flags.
header = '# Generated by gyp. Do not edit.\n'
mac_toolchain_dir = generator_flags.get('mac_toolchain_dir', None)
if flavor == 'mac' and mac_toolchain_dir:
header += "import os;\nos.environ['DEVELOPER_DIR']='%s'\n" \
% mac_toolchain_dir
# Set custom header flags.
header = "# Generated by gyp. Do not edit.\n"
mac_toolchain_dir = generator_flags.get("mac_toolchain_dir", None)
if flavor == "mac" and mac_toolchain_dir:
header += "import os;\nos.environ['DEVELOPER_DIR']='%s'\n" % mac_toolchain_dir
# Add header and write it out.
tool_path = os.path.join(out_path, 'gyp-%s-tool' % prefix)
with open(tool_path, 'w') as tool_file:
tool_file.write(
''.join([source[0], header] + source[1:]))
# Add header and write it out.
tool_path = os.path.join(out_path, "gyp-%s-tool" % prefix)
with open(tool_path, "w") as tool_file:
tool_file.write("".join([source[0], header] + source[1:]))
# Make file executable.
os.chmod(tool_path, 0o755)
# Make file executable.
os.chmod(tool_path, 0o755)
# From Alex Martelli,
@@ -491,14 +496,14 @@ def CopyTool(flavor, out_path, generator_flags={}):
# First comment, dated 2001/10/13.
# (Also in the printed Python Cookbook.)
def uniquer(seq, idfun=lambda x: x):
    seen = {}
    result = []
    for item in seq:
        marker = idfun(item)
        if marker in seen:
            continue
        seen[marker] = 1
        result.append(item)
    return result
@@ -506,80 +511,82 @@ def uniquer(seq, idfun=None):
# Based on http://code.activestate.com/recipes/576694/.
class OrderedSet(MutableSet):
    def __init__(self, iterable=None):
        self.end = end = []
        end += [None, end, end]  # sentinel node for doubly linked list
        self.map = {}  # key --> [key, prev, next]
        if iterable is not None:
            self |= iterable

    def __len__(self):
        return len(self.map)

    def __contains__(self, key):
        return key in self.map

    def add(self, key):
        if key not in self.map:
            end = self.end
            curr = end[1]
            curr[2] = end[1] = self.map[key] = [key, curr, end]

    def discard(self, key):
        if key in self.map:
            key, prev_item, next_item = self.map.pop(key)
            prev_item[2] = next_item
            next_item[1] = prev_item

    def __iter__(self):
        end = self.end
        curr = end[2]
        while curr is not end:
            yield curr[0]
            curr = curr[2]

    def __reversed__(self):
        end = self.end
        curr = end[1]
        while curr is not end:
            yield curr[0]
            curr = curr[1]

    # The second argument is an addition that causes a pylint warning.
    def pop(self, last=True):  # pylint: disable=W0221
        if not self:
            raise KeyError("set is empty")
        key = self.end[1][0] if last else self.end[2][0]
        self.discard(key)
        return key

    def __repr__(self):
        if not self:
            return "%s()" % (self.__class__.__name__,)
        return "%s(%r)" % (self.__class__.__name__, list(self))

    def __eq__(self, other):
        if isinstance(other, OrderedSet):
            return len(self) == len(other) and list(self) == list(other)
        return set(self) == set(other)

    # Extensions to the recipe.
    def update(self, iterable):
        for i in iterable:
            if i not in self:
                self.add(i)
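An illustrative session (the values are made up):

s = OrderedSet(["b", "a", "b", "c"])
list(s)  # -> ["b", "a", "c"]  (insertion order, duplicates dropped)
s.pop()  # -> "c"              (last-inserted element by default)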
class CycleError(Exception):
"""An exception raised when an unexpected cycle is detected."""
def __init__(self, nodes):
self.nodes = nodes
def __str__(self):
return 'CycleError: cycle involving: ' + str(self.nodes)
"""An exception raised when an unexpected cycle is detected."""
def __init__(self, nodes):
self.nodes = nodes
def __str__(self):
return "CycleError: cycle involving: " + str(self.nodes)
def TopologicallySorted(graph, get_edges):
r"""Topologically sort based on a user provided edge definition.
r"""Topologically sort based on a user provided edge definition.
Args:
graph: A list of node names.
@@ -599,44 +606,50 @@ def TopologicallySorted(graph, get_edges):
==>
    ['a', 'c', 'b']
"""
get_edges = memoize(get_edges)
visited = set()
visiting = set()
ordered_nodes = []
def Visit(node):
if node in visiting:
raise CycleError(visiting)
if node in visited:
return
visited.add(node)
visiting.add(node)
for neighbor in get_edges(node):
Visit(neighbor)
visiting.remove(node)
ordered_nodes.insert(0, node)
for node in sorted(graph):
Visit(node)
return ordered_nodes
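A small sketch of how TopologicallySorted is driven, mirroring the docstring's example (the graph here is illustrative; in the tests below it is called as gyp.common.TopologicallySorted):

graph = {"a": ["b", "c"], "b": [], "c": ["b"]}

def GetEdges(node):
    return graph[node]

assert TopologicallySorted(graph.keys(), GetEdges) == ["a", "c", "b"]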
def CrossCompileRequested():
# TODO: figure out how to not build extra host objects in the
# non-cross-compile case when this is enabled, and enable unconditionally.
return (
os.environ.get("GYP_CROSSCOMPILE")
or os.environ.get("AR_host")
or os.environ.get("CC_host")
or os.environ.get("CXX_host")
or os.environ.get("AR_target")
or os.environ.get("CC_target")
or os.environ.get("CXX_target")
)
def IsCygwin():
try:
out = subprocess.Popen(
"uname", stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
stdout, stderr = out.communicate()
if PY3:
stdout = stdout.decode("utf-8")
return "CYGWIN" in str(stdout)
except Exception:
return False
@ -12,61 +12,67 @@ import sys
class TestTopologicallySorted(unittest.TestCase):
def test_Valid(self):
"""Test that sorting works on a valid graph with one possible order."""
graph = {
"a": ["b", "c"],
"b": [],
"c": ["d"],
"d": ["b"],
}
def GetEdge(node):
return tuple(graph[node])
self.assertEqual(
gyp.common.TopologicallySorted(graph.keys(), GetEdge), ["a", "c", "d", "b"]
)
def test_Cycle(self):
"""Test that an exception is thrown on a cyclic graph."""
graph = {
"a": ["b"],
"b": ["c"],
"c": ["d"],
"d": ["a"],
}
def GetEdge(node):
return tuple(graph[node])
self.assertRaises(
gyp.common.CycleError, gyp.common.TopologicallySorted, graph.keys(), GetEdge
)
class TestGetFlavor(unittest.TestCase):
"""Test that gyp.common.GetFlavor works as intended"""
original_platform = ''
"""Test that gyp.common.GetFlavor works as intended"""
original_platform = ""
def setUp(self):
self.original_platform = sys.platform
def tearDown(self):
sys.platform = self.original_platform
def assertFlavor(self, expected, argument, param):
sys.platform = argument
self.assertEqual(expected, gyp.common.GetFlavor(param))
def test_platform_default(self):
self.assertFlavor("freebsd", "freebsd9", {})
self.assertFlavor("freebsd", "freebsd10", {})
self.assertFlavor("openbsd", "openbsd5", {})
self.assertFlavor("solaris", "sunos5", {})
self.assertFlavor("solaris", "sunos", {})
self.assertFlavor("linux", "linux2", {})
self.assertFlavor("linux", "linux3", {})
self.assertFlavor("linux", "linux", {})
def test_param(self):
self.assertFlavor("foobar", "linux2", {"flavor": "foobar"})
if __name__ == "__main__":
unittest.main()
@ -8,8 +8,8 @@ import locale
from functools import reduce
def XmlToString(content, encoding="utf-8", pretty=False):
""" Writes the XML content to disk, touching the file only if it has changed.
Visual Studio files have a lot of pre-defined structures. This function makes
it easy to represent these structures as Python data structures, instead of
@ -46,18 +46,18 @@ def XmlToString(content, encoding='utf-8', pretty=False):
Returns:
The XML content as a string.
"""
# We create a huge list of all the elements of the file.
xml_parts = ['<?xml version="1.0" encoding="%s"?>' % encoding]
if pretty:
xml_parts.append("\n")
_ConstructContentList(xml_parts, content, pretty)
# Convert it to a string
return "".join(xml_parts)
def _ConstructContentList(xml_parts, specification, pretty, level=0):
""" Appends the XML parts corresponding to the specification.
""" Appends the XML parts corresponding to the specification.
Args:
xml_parts: A list of XML parts to be appended to.
@ -65,48 +65,49 @@ def _ConstructContentList(xml_parts, specification, pretty, level=0):
pretty: True if we want pretty printing with indents and new lines.
level: Indentation level.
"""
# The first item in a specification is the name of the element.
if pretty:
indentation = " " * level
new_line = "\n"
else:
indentation = ""
new_line = ""
name = specification[0]
if not isinstance(name, str):
raise Exception(
"The first item of an EasyXml specification should be "
"a string. Specification was " + str(specification)
)
xml_parts.append(indentation + "<" + name)
# Optionally in second position is a dictionary of the attributes.
rest = specification[1:]
if rest and isinstance(rest[0], dict):
for at, val in sorted(rest[0].items()):
xml_parts.append(' %s="%s"' % (at, _XmlEscape(val, attr=True)))
rest = rest[1:]
if rest:
xml_parts.append(">")
all_strings = reduce(lambda x, y: x and isinstance(y, str), rest, True)
multi_line = not all_strings
if multi_line and new_line:
xml_parts.append(new_line)
for child_spec in rest:
# If it's a string, append a text node.
# Otherwise recurse over that child definition
if isinstance(child_spec, str):
xml_parts.append(_XmlEscape(child_spec))
else:
_ConstructContentList(xml_parts, child_spec, pretty, level + 1)
if multi_line and indentation:
xml_parts.append(indentation)
xml_parts.append("</%s>%s" % (name, new_line))
else:
xml_parts.append("/>%s" % new_line)
def WriteXmlIfChanged(content, path, encoding="utf-8", pretty=False, win32=False):
""" Writes the XML content to disk, touching the file only if it has changed.
Args:
content: The structured content to be written.
@ -114,50 +115,49 @@ def WriteXmlIfChanged(content, path, encoding='utf-8', pretty=False,
encoding: The encoding to report on the first line of the XML file.
pretty: True if we want pretty printing with indents and new lines.
"""
xml_string = XmlToString(content, encoding, pretty)
if win32 and os.linesep != "\r\n":
xml_string = xml_string.replace("\n", "\r\n")
default_encoding = locale.getdefaultlocale()[1]
if default_encoding and default_encoding.upper() != encoding.upper():
xml_string = xml_string.encode(encoding)
# Get the old content
try:
with open(path, "r") as file:
existing = file.read()
except IOError:
existing = None
# It has changed, write it
if existing != xml_string:
with open(path, "wb") as file:
file.write(xml_string)
_xml_escape_map = {
'"': "&quot;",
"'": "&apos;",
"<": "&lt;",
">": "&gt;",
"&": "&amp;",
"\n": "&#xA;",
"\r": "&#xD;",
}
_xml_escape_re = re.compile("(%s)" % "|".join(map(re.escape, _xml_escape_map.keys())))
def _XmlEscape(value, attr=False):
""" Escape a string for inclusion in XML."""
def replace(match):
m = match.string[match.start() : match.end()]
# don't replace single quotes in attrs
if attr and m == "'":
return m
return _xml_escape_map[m]
return _xml_escape_re.sub(replace, value)
""" Escape a string for inclusion in XML."""
def replace(match):
m = match.string[match.start() : match.end()]
# don't replace single quotes in attrs
if attr and m == "'":
return m
return _xml_escape_map[m]
return _xml_escape_re.sub(replace, value)
@ -10,98 +10,103 @@ import gyp.easy_xml as easy_xml
import unittest
try:
from StringIO import StringIO # Python 2
except ImportError:
from io import StringIO # Python 3
class TestSequenceFunctions(unittest.TestCase):
def setUp(self):
self.stderr = StringIO()
def test_EasyXml_simple(self):
self.assertEqual(
easy_xml.XmlToString(["test"]),
'<?xml version="1.0" encoding="utf-8"?><test/>',
)
self.assertEqual(
easy_xml.XmlToString(["test"], encoding="Windows-1252"),
'<?xml version="1.0" encoding="Windows-1252"?><test/>',
)
def test_EasyXml_simple_with_attributes(self):
self.assertEqual(
easy_xml.XmlToString(["test2", {"a": "value1", "b": "value2"}]),
'<?xml version="1.0" encoding="utf-8"?><test2 a="value1" b="value2"/>',
)
def test_EasyXml_escaping(self):
original = "<test>'\"\r&\nfoo"
converted = "&lt;test&gt;'&quot;&#xD;&amp;&#xA;foo"
converted_apos = converted.replace("'", "&apos;")
self.assertEqual(
easy_xml.XmlToString(["test3", {"a": original}, original]),
'<?xml version="1.0" encoding="utf-8"?><test3 a="%s">%s</test3>'
% (converted, converted_apos),
)
def test_EasyXml_pretty(self):
self.assertEqual(
easy_xml.XmlToString(
["test3", ["GrandParent", ["Parent1", ["Child"]], ["Parent2"]]],
pretty=True,
),
'<?xml version="1.0" encoding="utf-8"?>\n'
"<test3>\n"
" <GrandParent>\n"
" <Parent1>\n"
" <Child/>\n"
" </Parent1>\n"
" <Parent2/>\n"
" </GrandParent>\n"
"</test3>\n",
)
def test_EasyXml_complex(self):
# We want to create:
target = (
'<?xml version="1.0" encoding="utf-8"?>'
"<Project>"
'<PropertyGroup Label="Globals">'
"<ProjectGuid>{D2250C20-3A94-4FB9-AF73-11BC5B73884B}</ProjectGuid>"
"<Keyword>Win32Proj</Keyword>"
"<RootNamespace>automated_ui_tests</RootNamespace>"
"</PropertyGroup>"
'<Import Project="$(VCTargetsPath)\\Microsoft.Cpp.props"/>'
"<PropertyGroup "
"Condition=\"'$(Configuration)|$(Platform)'=="
'\'Debug|Win32\'" Label="Configuration">'
"<ConfigurationType>Application</ConfigurationType>"
"<CharacterSet>Unicode</CharacterSet>"
"</PropertyGroup>"
"</Project>"
)
xml = easy_xml.XmlToString(
[
"Project",
[
"PropertyGroup",
{"Label": "Globals"},
["ProjectGuid", "{D2250C20-3A94-4FB9-AF73-11BC5B73884B}"],
["Keyword", "Win32Proj"],
["RootNamespace", "automated_ui_tests"],
],
["Import", {"Project": "$(VCTargetsPath)\\Microsoft.Cpp.props"}],
[
"PropertyGroup",
{
"Condition": "'$(Configuration)|$(Platform)'=='Debug|Win32'",
"Label": "Configuration",
},
["ConfigurationType", "Application"],
["CharacterSet", "Unicode"],
],
]
)
self.assertEqual(xml, target)
if __name__ == "__main__":
unittest.main()
@ -14,41 +14,42 @@ import sys
def main(args):
executor = FlockTool()
executor.Dispatch(args)
class FlockTool(object):
"""This class emulates the 'flock' command."""
def Dispatch(self, args):
"""Dispatches a string command to a method."""
if len(args) < 1:
raise Exception("Not enough arguments")
"""This class emulates the 'flock' command."""
method = "Exec%s" % self._CommandifyName(args[0])
getattr(self, method)(*args[1:])
def _CommandifyName(self, name_string):
"""Transforms a tool name like copy-info-plist to CopyInfoPlist"""
return name_string.title().replace("-", "")
def ExecFlock(self, lockfile, *cmd_list):
"""Emulates the most basic behavior of Linux's flock(1)."""
# Rely on exception handling to report errors.
# Note that the stock python on SunOS has a bug
# where fcntl.flock(fd, LOCK_EX) always fails
# with EBADF, that's why we use this F_SETLK
# hack instead.
fd = os.open(lockfile, os.O_WRONLY | os.O_NOCTTY | os.O_CREAT, 0o666)
if sys.platform.startswith("aix"):
# Python on AIX is compiled with LARGEFILE support, which changes the
# struct size.
op = struct.pack("hhIllqq", fcntl.F_WRLCK, 0, 0, 0, 0, 0, 0)
else:
op = struct.pack("hhllhhl", fcntl.F_WRLCK, 0, 0, 0, 0, 0, 0)
fcntl.fcntl(fd, fcntl.F_SETLK, op)
return subprocess.call(cmd_list)
if __name__ == "__main__":
sys.exit(main(sys.argv[1:]))
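A hedged usage sketch (the module name and paths are illustrative, and the tool is POSIX-only since it relies on fcntl): take a lock on a file, then run an arbitrary command while the lock is held.

import flock_tool  # assuming the file above is importable under this name
# Equivalent to running: flock /tmp/build.lock touch /tmp/out
flock_tool.main(["flock", "/tmp/build.lock", "touch", "/tmp/out"])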
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@ -16,100 +16,105 @@ generator_wants_sorted_dependencies = False
# Lifted from make.py. The actual values don't matter much.
generator_default_variables = {
"CONFIGURATION_NAME": "$(BUILDTYPE)",
"EXECUTABLE_PREFIX": "",
"EXECUTABLE_SUFFIX": "",
"INTERMEDIATE_DIR": "$(obj).$(TOOLSET)/$(TARGET)/geni",
"PRODUCT_DIR": "$(builddir)",
"RULE_INPUT_DIRNAME": "%(INPUT_DIRNAME)s",
"RULE_INPUT_EXT": "$(suffix $<)",
"RULE_INPUT_NAME": "$(notdir $<)",
"RULE_INPUT_PATH": "$(abspath $<)",
"RULE_INPUT_ROOT": "%(INPUT_ROOT)s",
"SHARED_INTERMEDIATE_DIR": "$(obj)/gen",
"SHARED_LIB_PREFIX": "lib",
"STATIC_LIB_PREFIX": "lib",
"STATIC_LIB_SUFFIX": ".a",
}
def IsMac(params):
return "mac" == gyp.common.GetFlavor(params)
def CalculateVariables(default_variables, params):
default_variables.setdefault("OS", gyp.common.GetFlavor(params))
def AddCommandsForTarget(cwd, target, params, per_config_commands):
output_dir = params["generator_flags"].get("output_dir", "out")
for configuration_name, configuration in target["configurations"].items():
if IsMac(params):
xcode_settings = gyp.xcode_emulation.XcodeSettings(target)
cflags = xcode_settings.GetCflags(configuration_name)
cflags_c = xcode_settings.GetCflagsC(configuration_name)
cflags_cc = xcode_settings.GetCflagsCC(configuration_name)
else:
cflags = configuration.get("cflags", [])
cflags_c = configuration.get("cflags_c", [])
cflags_cc = configuration.get("cflags_cc", [])
cflags_c = cflags + cflags_c
cflags_cc = cflags + cflags_cc
defines = configuration.get("defines", [])
defines = ["-D" + s for s in defines]
# TODO(bnoordhuis) Handle generated source files.
sources = target.get("sources", [])
sources = [s for s in sources if s.endswith(".c") or s.endswith(".cc")]
def resolve(filename):
return os.path.abspath(os.path.join(cwd, filename))
# TODO(bnoordhuis) Handle generated header files.
include_dirs = configuration.get("include_dirs", [])
include_dirs = [s for s in include_dirs if not s.startswith("$(obj)")]
includes = ["-I" + resolve(s) for s in include_dirs]
defines = gyp.common.EncodePOSIXShellList(defines)
includes = gyp.common.EncodePOSIXShellList(includes)
cflags_c = gyp.common.EncodePOSIXShellList(cflags_c)
cflags_cc = gyp.common.EncodePOSIXShellList(cflags_cc)
commands = per_config_commands.setdefault(configuration_name, [])
for source in sources:
file = resolve(source)
isc = source.endswith(".c")
cc = "cc" if isc else "c++"
cflags = cflags_c if isc else cflags_cc
command = " ".join(
(
cc,
defines,
includes,
cflags,
"-c",
gyp.common.EncodePOSIXShellArgument(file),
)
)
commands.append(dict(command=command, directory=output_dir, file=file))
def GenerateOutput(target_list, target_dicts, data, params):
per_config_commands = {}
for qualified_target, target in target_dicts.items():
build_file, target_name, toolset = gyp.common.ParseQualifiedTarget(
qualified_target
)
if IsMac(params):
settings = data[build_file]
gyp.xcode_emulation.MergeGlobalXcodeSettingsToSpec(settings, target)
cwd = os.path.dirname(build_file)
AddCommandsForTarget(cwd, target, params, per_config_commands)
output_dir = params["generator_flags"].get("output_dir", "out")
for configuration_name, commands in per_config_commands.items():
filename = os.path.join(output_dir, configuration_name, "compile_commands.json")
gyp.common.EnsureDirExists(filename)
fp = open(filename, "w")
json.dump(commands, fp=fp, indent=0, check_circular=False)
def PerformBuild(data, configurations, params):
pass
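Each record appended by AddCommandsForTarget above has the shape below (the command string and paths are illustrative); json.dump then writes one list of these per configuration to <output_dir>/<config>/compile_commands.json.

dict(
    command="cc -DNDEBUG -I/abs/include -c /abs/src/foo.c",  # illustrative
    directory="out",
    file="/abs/src/foo.c",
)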

View file

@ -1,100 +1,104 @@
# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from __future__ import print_function
import collections
import os
import gyp
import gyp.common
import gyp.msvs_emulation
import json
import sys
generator_supports_multiple_toolsets = True
generator_wants_static_library_dependencies_adjusted = False
generator_filelist_paths = {}
generator_default_variables = {}
for dirname in [
"INTERMEDIATE_DIR",
"SHARED_INTERMEDIATE_DIR",
"PRODUCT_DIR",
"LIB_DIR",
"SHARED_LIB_DIR",
]:
# Some gyp steps fail if these are empty(!).
generator_default_variables[dirname] = "dir"
for unused in [
"RULE_INPUT_PATH",
"RULE_INPUT_ROOT",
"RULE_INPUT_NAME",
"RULE_INPUT_DIRNAME",
"RULE_INPUT_EXT",
"EXECUTABLE_PREFIX",
"EXECUTABLE_SUFFIX",
"STATIC_LIB_PREFIX",
"STATIC_LIB_SUFFIX",
"SHARED_LIB_PREFIX",
"SHARED_LIB_SUFFIX",
"CONFIGURATION_NAME",
]:
generator_default_variables[unused] = ""
def CalculateVariables(default_variables, params):
generator_flags = params.get("generator_flags", {})
for key, val in generator_flags.items():
default_variables.setdefault(key, val)
default_variables.setdefault("OS", gyp.common.GetFlavor(params))
flavor = gyp.common.GetFlavor(params)
if flavor == "win":
gyp.msvs_emulation.CalculateCommonVariables(default_variables, params)
def CalculateGeneratorInputInfo(params):
"""Calculate the generator specific info that gets fed to input (called by
"""Calculate the generator specific info that gets fed to input (called by
gyp)."""
generator_flags = params.get("generator_flags", {})
if generator_flags.get("adjust_static_libraries", False):
global generator_wants_static_library_dependencies_adjusted
generator_wants_static_library_dependencies_adjusted = True
toplevel = params["options"].toplevel_dir
generator_dir = os.path.relpath(params["options"].generator_output or ".")
# output_dir: relative path from generator_dir to the build directory.
output_dir = generator_flags.get("output_dir", "out")
qualified_out_dir = os.path.normpath(
os.path.join(toplevel, generator_dir, output_dir, "gypfiles")
)
global generator_filelist_paths
generator_filelist_paths = {
"toplevel": toplevel,
"qualified_out_dir": qualified_out_dir,
}
def GenerateOutput(target_list, target_dicts, data, params):
# Map of target -> list of targets it depends on.
edges = {}
# Queue of targets to visit.
targets_to_visit = target_list[:]
while len(targets_to_visit) > 0:
target = targets_to_visit.pop()
if target in edges:
continue
edges[target] = []
for dep in target_dicts[target].get("dependencies", []):
edges[target].append(dep)
targets_to_visit.append(dep)
try:
filepath = params["generator_flags"]["output_dir"]
except KeyError:
filepath = "."
filename = os.path.join(filepath, "dump.json")
f = open(filename, "w")
json.dump(edges, f)
f.close()
print("Wrote json to %s." % filename)
@ -30,402 +30,441 @@ PY3 = bytes != str
generator_wants_static_library_dependencies_adjusted = False
generator_default_variables = {}
for dirname in ["INTERMEDIATE_DIR", "PRODUCT_DIR", "LIB_DIR", "SHARED_LIB_DIR"]:
# Some gyp steps fail if these are empty(!), so we convert them to variables
generator_default_variables[dirname] = "$" + dirname
for unused in [
"RULE_INPUT_PATH",
"RULE_INPUT_ROOT",
"RULE_INPUT_NAME",
"RULE_INPUT_DIRNAME",
"RULE_INPUT_EXT",
"EXECUTABLE_PREFIX",
"EXECUTABLE_SUFFIX",
"STATIC_LIB_PREFIX",
"STATIC_LIB_SUFFIX",
"SHARED_LIB_PREFIX",
"SHARED_LIB_SUFFIX",
"CONFIGURATION_NAME",
]:
generator_default_variables[unused] = ""
# Include dirs will occasionally use the SHARED_INTERMEDIATE_DIR variable as
# part of the path when dealing with generated headers. This value will be
# replaced dynamically for each configuration.
generator_default_variables["SHARED_INTERMEDIATE_DIR"] = "$SHARED_INTERMEDIATE_DIR"
def CalculateVariables(default_variables, params):
generator_flags = params.get("generator_flags", {})
for key, val in generator_flags.items():
default_variables.setdefault(key, val)
flavor = gyp.common.GetFlavor(params)
default_variables.setdefault("OS", flavor)
if flavor == "win":
gyp.msvs_emulation.CalculateCommonVariables(default_variables, params)
def CalculateGeneratorInputInfo(params):
"""Calculate the generator specific info that gets fed to input (called by
"""Calculate the generator specific info that gets fed to input (called by
gyp)."""
generator_flags = params.get("generator_flags", {})
if generator_flags.get("adjust_static_libraries", False):
global generator_wants_static_library_dependencies_adjusted
generator_wants_static_library_dependencies_adjusted = True
def GetAllIncludeDirectories(
target_list,
target_dicts,
shared_intermediate_dirs,
config_name,
params,
compiler_path,
):
"""Calculate the set of include directories to be used.
Returns:
A list including all the include_dir's specified for every target followed
by any include directories that were added as cflag compiler options.
"""
gyp_includes_set = set()
compiler_includes_list = []
# Find compiler's default include dirs.
if compiler_path:
command = shlex.split(compiler_path)
command.extend(["-E", "-xc++", "-v", "-"])
proc = subprocess.Popen(
args=command,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
output = proc.communicate()[1]
if PY3:
output = output.decode("utf-8")
# Extract the list of include dirs from the output, which has this format:
# ...
# #include "..." search starts here:
# #include <...> search starts here:
# /usr/include/c++/4.6
# /usr/local/include
# End of search list.
# ...
in_include_list = False
for line in output.splitlines():
if line.startswith("#include"):
in_include_list = True
continue
if line.startswith("End of search list."):
break
if in_include_list:
include_dir = line.strip()
if include_dir not in compiler_includes_list:
compiler_includes_list.append(include_dir)
flavor = gyp.common.GetFlavor(params)
if flavor == "win":
generator_flags = params.get("generator_flags", {})
for target_name in target_list:
target = target_dicts[target_name]
if config_name in target["configurations"]:
config = target["configurations"][config_name]
# Look for any include dirs that were explicitly added via cflags. This
# may be done in gyp files to force certain includes to come at the end.
# TODO(jgreenwald): Change the gyp files to not abuse cflags for this, and
# remove this.
if flavor == "win":
msvs_settings = gyp.msvs_emulation.MsvsSettings(target, generator_flags)
cflags = msvs_settings.GetCflags(config_name)
else:
cflags = config["cflags"]
for cflag in cflags:
if cflag.startswith("-I"):
include_dir = cflag[2:]
if include_dir not in compiler_includes_list:
compiler_includes_list.append(include_dir)
# Find standard gyp include dirs.
if "include_dirs" in config:
include_dirs = config["include_dirs"]
for shared_intermediate_dir in shared_intermediate_dirs:
for include_dir in include_dirs:
include_dir = include_dir.replace(
"$SHARED_INTERMEDIATE_DIR", shared_intermediate_dir
)
if not os.path.isabs(include_dir):
base_dir = os.path.dirname(target_name)
include_dir = base_dir + "/" + include_dir
include_dir = os.path.abspath(include_dir)
gyp_includes_set.add(include_dir)
# Generate a list that has all the include dirs.
all_includes_list = list(gyp_includes_set)
all_includes_list.sort()
for compiler_include in compiler_includes_list:
if compiler_include not in gyp_includes_set:
all_includes_list.append(compiler_include)
# All done.
return all_includes_list
def GetCompilerPath(target_list, data, options):
"""Determine a command that can be used to invoke the compiler.
"""Determine a command that can be used to invoke the compiler.
Returns:
If this is a gyp project that has explicit make settings, try to determine
the compiler from that. Otherwise, see if a compiler was specified via the
CC_target environment variable.
"""
# First, see if the compiler is configured in make's settings.
build_file, _, _ = gyp.common.ParseQualifiedTarget(target_list[0])
make_global_settings_dict = data[build_file].get("make_global_settings", {})
for key, value in make_global_settings_dict:
if key in ["CC", "CXX"]:
return os.path.join(options.toplevel_dir, value)
# Check to see if the compiler was specified as an environment variable.
for key in ["CC_target", "CC", "CXX"]:
compiler = os.environ.get(key)
if compiler:
return compiler
return "gcc"
def GetAllDefines(target_list, target_dicts, data, config_name, params, compiler_path):
"""Calculate the defines for a project.
Returns:
A dict that includes explicit defines declared in gyp files along with all
of the default defines that the compiler uses.
"""
# Get defines declared in the gyp files.
all_defines = {}
flavor = gyp.common.GetFlavor(params)
if flavor == "win":
generator_flags = params.get("generator_flags", {})
for target_name in target_list:
target = target_dicts[target_name]
if flavor == "win":
msvs_settings = gyp.msvs_emulation.MsvsSettings(target, generator_flags)
extra_defines = msvs_settings.GetComputedDefines(config_name)
else:
extra_defines = []
if config_name in target["configurations"]:
config = target["configurations"][config_name]
target_defines = config["defines"]
else:
target_defines = []
for define in target_defines + extra_defines:
split_define = define.split("=", 1)
if len(split_define) == 1:
split_define.append("1")
if split_define[0].strip() in all_defines:
# Already defined
continue
all_defines[split_define[0].strip()] = split_define[1].strip()
# Get default compiler defines (if possible).
if flavor == "win":
return all_defines # Default defines already processed in the loop above.
if compiler_path:
command = shlex.split(compiler_path)
command.extend(["-E", "-dM", "-"])
cpp_proc = subprocess.Popen(
args=command, cwd=".", stdin=subprocess.PIPE, stdout=subprocess.PIPE
)
cpp_output = cpp_proc.communicate()[0]
if PY3:
cpp_output = cpp_output.decode("utf-8")
cpp_lines = cpp_output.split("\n")
for cpp_line in cpp_lines:
if not cpp_line.strip():
continue
cpp_line_parts = cpp_line.split(" ", 2)
key = cpp_line_parts[1]
if len(cpp_line_parts) >= 3:
val = cpp_line_parts[2]
else:
val = "1"
all_defines[key] = val
return all_defines
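The parsing above splits each preprocessor line on its first two spaces; for example (a typical line from running the compiler with -E -dM, though the exact macro set varies by compiler):

cpp_line = "#define __STDC__ 1"
parts = cpp_line.split(" ", 2)  # ['#define', '__STDC__', '1']
# -> all_defines["__STDC__"] = "1"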
def WriteIncludePaths(out, eclipse_langs, include_dirs):
"""Write the includes section of a CDT settings export file."""
"""Write the includes section of a CDT settings export file."""
out.write(
' <section name="org.eclipse.cdt.internal.ui.wizards.'
'settingswizards.IncludePaths">\n'
)
out.write(' <language name="holder for library settings"></language>\n')
for lang in eclipse_langs:
out.write(' <language name="%s">\n' % lang)
for include_dir in include_dirs:
out.write(
' <includepath workspace_path="false">%s</includepath>\n'
% include_dir
)
out.write(" </language>\n")
out.write(" </section>\n")
def WriteMacros(out, eclipse_langs, defines):
"""Write the macros section of a CDT settings export file."""
"""Write the macros section of a CDT settings export file."""
out.write(
' <section name="org.eclipse.cdt.internal.ui.wizards.'
'settingswizards.Macros">\n'
)
out.write(' <language name="holder for library settings"></language>\n')
for lang in eclipse_langs:
out.write(' <language name="%s">\n' % lang)
for key in sorted(defines):
out.write(
" <macro><name>%s</name><value>%s</value></macro>\n"
% (escape(key), escape(defines[key]))
)
out.write(" </language>\n")
out.write(" </section>\n")
def GenerateOutputForConfig(target_list, target_dicts, data, params, config_name):
options = params["options"]
generator_flags = params.get("generator_flags", {})
# build_dir: relative path from source root to our output files.
# e.g. "out/Debug"
build_dir = os.path.join(generator_flags.get("output_dir", "out"), config_name)
toplevel_build = os.path.join(options.toplevel_dir, build_dir)
# Ninja uses out/Debug/gen while make uses out/Debug/obj/gen as the
# SHARED_INTERMEDIATE_DIR. Include both possible locations.
shared_intermediate_dirs = [
os.path.join(toplevel_build, "obj", "gen"),
os.path.join(toplevel_build, "gen"),
]
GenerateCdtSettingsFile(
target_list,
target_dicts,
data,
params,
config_name,
os.path.join(toplevel_build, "eclipse-cdt-settings.xml"),
options,
shared_intermediate_dirs,
)
GenerateClasspathFile(
target_list,
target_dicts,
options.toplevel_dir,
toplevel_build,
os.path.join(toplevel_build, "eclipse-classpath.xml"),
)
def GenerateCdtSettingsFile(
target_list,
target_dicts,
data,
params,
config_name,
out_name,
options,
shared_intermediate_dirs,
):
gyp.common.EnsureDirExists(out_name)
with open(out_name, "w") as out:
out.write('<?xml version="1.0" encoding="UTF-8"?>\n')
out.write("<cdtprojectproperties>\n")
eclipse_langs = [
"C++ Source File",
"C Source File",
"Assembly Source File",
"GNU C++",
"GNU C",
"Assembly",
]
compiler_path = GetCompilerPath(target_list, data, options)
include_dirs = GetAllIncludeDirectories(
target_list,
target_dicts,
shared_intermediate_dirs,
config_name,
params,
compiler_path,
)
WriteIncludePaths(out, eclipse_langs, include_dirs)
defines = GetAllDefines(
target_list, target_dicts, data, config_name, params, compiler_path
)
WriteMacros(out, eclipse_langs, defines)
out.write("</cdtprojectproperties>\n")
def GenerateClasspathFile(
target_list, target_dicts, toplevel_dir, toplevel_build, out_name
):
"""Generates a classpath file suitable for symbol navigation and code
completion of Java code (such as in Android projects) by finding all
.java and .jar files used as action inputs."""
gyp.common.EnsureDirExists(out_name)
result = ET.Element("classpath")
def AddElements(kind, paths):
# First, we need to normalize the paths so they are all relative to the
# toplevel dir.
rel_paths = set()
for path in paths:
if os.path.isabs(path):
rel_paths.add(os.path.relpath(path, toplevel_dir))
else:
rel_paths.add(path)
for path in sorted(rel_paths):
entry_element = ET.SubElement(result, "classpathentry")
entry_element.set("kind", kind)
entry_element.set("path", path)
AddElements("lib", GetJavaJars(target_list, target_dicts, toplevel_dir))
AddElements("src", GetJavaSourceDirs(target_list, target_dicts, toplevel_dir))
# Include the standard JRE container and a dummy out folder
AddElements("con", ["org.eclipse.jdt.launching.JRE_CONTAINER"])
# Include a dummy out folder so that Eclipse doesn't use the default /bin
# folder in the root of the project.
AddElements("output", [os.path.join(toplevel_build, ".eclipse-java-build")])
ET.ElementTree(result).write(out_name)
def GetJavaJars(target_list, target_dicts, toplevel_dir):
"""Generates a sequence of all .jars used as inputs."""
for target_name in target_list:
target = target_dicts[target_name]
for action in target.get("actions", []):
for input_ in action["inputs"]:
if os.path.splitext(input_)[1] == ".jar" and not input_.startswith("$"):
if os.path.isabs(input_):
yield input_
else:
yield os.path.join(os.path.dirname(target_name), input_)
def GetJavaSourceDirs(target_list, target_dicts, toplevel_dir):
"""Generates a sequence of all likely java package root directories."""
for target_name in target_list:
target = target_dicts[target_name]
for action in target.get("actions", []):
for input_ in action["inputs"]:
if os.path.splitext(input_)[1] == ".java" and not input_.startswith(
"$"
):
dir_ = os.path.dirname(
os.path.join(os.path.dirname(target_name), input_)
)
# If there is a parent 'src' or 'java' folder, navigate up to it -
# these are canonical package root names in Chromium. This will
# break if 'src' or 'java' exists in the package structure. This
# could be further improved by inspecting the java file for the
# package name if this proves to be too fragile in practice.
parent_search = dir_
while os.path.basename(parent_search) not in ["src", "java"]:
parent_search, _ = os.path.split(parent_search)
if not parent_search or parent_search == toplevel_dir:
# Didn't find a known root, just return the original path
yield dir_
break
else:
yield parent_search
def GenerateOutput(target_list, target_dicts, data, params):
"""Generate an XML settings file that can be imported into a CDT project."""
"""Generate an XML settings file that can be imported into a CDT project."""
if params["options"].generator_output:
raise NotImplementedError("--generator_output not implemented for eclipse")
user_config = params.get("generator_flags", {}).get("config", None)
if user_config:
GenerateOutputForConfig(target_list, target_dicts, data, params, user_config)
else:
config_names = target_dicts[target_list[0]]["configurations"]
for config_name in config_names:
GenerateOutputForConfig(
target_list, target_dicts, data, params, config_name
)
@ -32,36 +32,33 @@ to change.
import gyp.common
import errno
import os
import pprint
# These variables should just be spit back out as variable references.
_generator_identity_variables = [
"CONFIGURATION_NAME",
"EXECUTABLE_PREFIX",
"EXECUTABLE_SUFFIX",
"INTERMEDIATE_DIR",
"LIB_DIR",
"PRODUCT_DIR",
"RULE_INPUT_ROOT",
"RULE_INPUT_DIRNAME",
"RULE_INPUT_EXT",
"RULE_INPUT_NAME",
"RULE_INPUT_PATH",
"SHARED_INTERMEDIATE_DIR",
"SHARED_LIB_DIR",
"SHARED_LIB_PREFIX",
"SHARED_LIB_SUFFIX",
"STATIC_LIB_PREFIX",
"STATIC_LIB_SUFFIX",
]
# gypd doesn't define a default value for OS like many other generator
# modules. Specify "-D OS=whatever" on the command line to provide a value.
generator_default_variables = {}
# gypd supports multiple toolsets
generator_supports_multiple_toolsets = True
@ -71,24 +68,22 @@ generator_supports_multiple_toolsets = True
# module should use < for the early phase and then switch to > for the late
# phase. Bonus points for carrying @ back into the output too.
for v in _generator_identity_variables:
generator_default_variables[v] = "<(%s)" % v
def GenerateOutput(target_list, target_dicts, data, params):
output_files = {}
for qualified_target in target_list:
[input_file, target] = gyp.common.ParseQualifiedTarget(qualified_target)[0:2]
if input_file[-4:] != ".gyp":
continue
input_file_stem = input_file[:-4]
output_file = input_file_stem + params["options"].suffix + ".gypd"
output_files[output_file] = output_files.get(output_file, input_file)
for output_file, input_file in output_files.items():
output = open(output_file, "w")
pprint.pprint(data[input_file], output)
output.close()
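# A .gypd file is plain pprint output of a Python dict, so it can be read
# back for inspection; a hedged sketch ("example.gypd" is a made-up name):
with open("example.gypd") as f:
    gyp_data = eval(f.read())  # pprint output is literal Python syntax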

View file

@ -21,36 +21,38 @@ import sys
# All of this stuff about generator variables was lovingly ripped from gypd.py.
# That module has a much better description of what's going on and why.
_generator_identity_variables = [
"EXECUTABLE_PREFIX",
"EXECUTABLE_SUFFIX",
"INTERMEDIATE_DIR",
"PRODUCT_DIR",
"RULE_INPUT_ROOT",
"RULE_INPUT_DIRNAME",
"RULE_INPUT_EXT",
"RULE_INPUT_NAME",
"RULE_INPUT_PATH",
"SHARED_INTERMEDIATE_DIR",
]
generator_default_variables = {}
for v in _generator_identity_variables:
generator_default_variables[v] = "<(%s)" % v
def GenerateOutput(target_list, target_dicts, data, params):
locals = {
"target_list": target_list,
"target_dicts": target_dicts,
"data": data,
}
# Use a banner that looks like the stock Python one and like what
# code.interact uses by default, but tack on something to indicate what
# locals are available, and identify gypsh.
banner = "Python %s on %s\nlocals.keys() = %s\ngypsh" % (
sys.version,
sys.platform,
repr(sorted(locals.keys())),
)
code.interact(banner, local=locals)

File diff suppressed because it is too large

File diff suppressed because it is too large

View file

@ -9,33 +9,39 @@ import gyp.generator.msvs as msvs
import unittest
try:
from StringIO import StringIO # Python 2
except ImportError:
from io import StringIO # Python 3
class TestSequenceFunctions(unittest.TestCase):
def setUp(self):
self.stderr = StringIO()
def test_GetLibraries(self):
self.assertEqual(msvs._GetLibraries({}), [])
self.assertEqual(msvs._GetLibraries({"libraries": []}), [])
self.assertEqual(
msvs._GetLibraries({"other": "foo", "libraries": ["a.lib"]}), ["a.lib"]
)
self.assertEqual(msvs._GetLibraries({"libraries": ["-la"]}), ["a.lib"])
self.assertEqual(
msvs._GetLibraries(
{
"libraries": [
"a.lib",
"b.lib",
"c.lib",
"-lb.lib",
"-lb.lib",
"d.lib",
"a.lib",
]
}
),
["c.lib", "b.lib", "d.lib", "a.lib"],
)
if __name__ == "__main__":
unittest.main()

File diff suppressed because it is too large

View file

@ -13,34 +13,43 @@ import gyp.generator.ninja as ninja
class TestPrefixesAndSuffixes(unittest.TestCase):
def test_BinaryNamesWindows(self):
# These cannot run on non-Windows as they require a VS installation to
# correctly handle variable expansion.
if sys.platform.startswith("win"):
writer = ninja.NinjaWriter(
"foo", "wee", ".", ".", "build.ninja", ".", "build.ninja", "win"
)
spec = {"target_name": "wee"}
self.assertTrue(
writer.ComputeOutputFileName(spec, "executable").endswith(".exe")
)
self.assertTrue(
writer.ComputeOutputFileName(spec, "shared_library").endswith(".dll")
)
self.assertTrue(
writer.ComputeOutputFileName(spec, "static_library").endswith(".lib")
)
def test_BinaryNamesLinux(self):
writer = ninja.NinjaWriter(
"foo", "wee", ".", ".", "build.ninja", ".", "build.ninja", "linux"
)
spec = {"target_name": "wee"}
self.assertTrue("." not in writer.ComputeOutputFileName(spec, "executable"))
self.assertTrue(
writer.ComputeOutputFileName(spec, "shared_library").startswith("lib")
)
self.assertTrue(
writer.ComputeOutputFileName(spec, "static_library").startswith("lib")
)
self.assertTrue(
writer.ComputeOutputFileName(spec, "shared_library").endswith(".so")
)
self.assertTrue(
writer.ComputeOutputFileName(spec, "static_library").endswith(".a")
)
if __name__ == "__main__":
unittest.main()

File diff suppressed because it is too large

View file

@ -12,12 +12,14 @@ import sys
class TestEscapeXcodeDefine(unittest.TestCase):
if sys.platform == "darwin":
def test_InheritedRemainsUnescaped(self):
self.assertEqual(xcode.EscapeXcodeDefine("$(inherited)"), "$(inherited)")
def test_Escaping(self):
self.assertEqual(xcode.EscapeXcodeDefine('a b"c\\'), 'a\\ b\\"c\\\\')
if __name__ == "__main__":
unittest.main()

File diff suppressed because it is too large

View file

@ -8,83 +8,91 @@
import gyp.input
import unittest
import sys
class TestFindCycles(unittest.TestCase):
def setUp(self):
self.nodes = {}
for x in ("a", "b", "c", "d", "e"):
self.nodes[x] = gyp.input.DependencyGraphNode(x)
def _create_dependency(self, dependent, dependency):
dependent.dependencies.append(dependency)
dependency.dependents.append(dependent)
def test_no_cycle_empty_graph(self):
for label, node in self.nodes.items():
self.assertEqual([], node.FindCycles())
def test_no_cycle_line(self):
self._create_dependency(self.nodes["a"], self.nodes["b"])
self._create_dependency(self.nodes["b"], self.nodes["c"])
self._create_dependency(self.nodes["c"], self.nodes["d"])
for label, node in self.nodes.items():
self.assertEqual([], node.FindCycles())
def test_no_cycle_dag(self):
self._create_dependency(self.nodes["a"], self.nodes["b"])
self._create_dependency(self.nodes["a"], self.nodes["c"])
self._create_dependency(self.nodes["b"], self.nodes["c"])
for label, node in self.nodes.items():
self.assertEqual([], node.FindCycles())
def test_cycle_self_reference(self):
self._create_dependency(self.nodes["a"], self.nodes["a"])
self.assertEqual(
[[self.nodes["a"], self.nodes["a"]]], self.nodes["a"].FindCycles()
)
def test_cycle_two_nodes(self):
self._create_dependency(self.nodes["a"], self.nodes["b"])
self._create_dependency(self.nodes["b"], self.nodes["a"])
self.assertEqual(
[[self.nodes["a"], self.nodes["b"], self.nodes["a"]]],
self.nodes["a"].FindCycles(),
)
self.assertEqual(
[[self.nodes["b"], self.nodes["a"], self.nodes["b"]]],
self.nodes["b"].FindCycles(),
)
def test_two_cycles(self):
self._create_dependency(self.nodes["a"], self.nodes["b"])
self._create_dependency(self.nodes["b"], self.nodes["a"])
self._create_dependency(self.nodes["b"], self.nodes["c"])
self._create_dependency(self.nodes["c"], self.nodes["b"])
cycles = self.nodes["a"].FindCycles()
self.assertTrue([self.nodes["a"], self.nodes["b"], self.nodes["a"]] in cycles)
self.assertTrue([self.nodes["b"], self.nodes["c"], self.nodes["b"]] in cycles)
self.assertEqual(2, len(cycles))
def test_big_cycle(self):
self._create_dependency(self.nodes["a"], self.nodes["b"])
self._create_dependency(self.nodes["b"], self.nodes["c"])
self._create_dependency(self.nodes["c"], self.nodes["d"])
self._create_dependency(self.nodes["d"], self.nodes["e"])
self._create_dependency(self.nodes["e"], self.nodes["a"])
self.assertEqual(
[
[
self.nodes["a"],
self.nodes["b"],
self.nodes["c"],
self.nodes["d"],
self.nodes["e"],
self.nodes["a"],
]
],
self.nodes["a"].FindCycles(),
)
if __name__ == "__main__":
unittest.main()

File diff suppressed because it is too large

File diff suppressed because it is too large

View file

@ -10,10 +10,11 @@ use Python.
"""
import textwrap
import re
def escape_path(word):
return word.replace("$ ", "$$ ").replace(" ", "$ ").replace(":", "$:")
class Writer(object):
def __init__(self, output, width=78):
@ -21,47 +22,58 @@ class Writer(object):
self.width = width
def newline(self):
self.output.write("\n")
def comment(self, text):
for line in textwrap.wrap(text, self.width - 2):
self.output.write("# " + line + "\n")
def variable(self, key, value, indent=0):
if value is None:
return
if isinstance(value, list):
value = " ".join(filter(None, value)) # Filter out empty strings.
self._line("%s = %s" % (key, value), indent)
def pool(self, name, depth):
self._line("pool %s" % name)
self.variable("depth", depth, indent=1)
def rule(
self,
name,
command,
description=None,
depfile=None,
generator=False,
pool=None,
restat=False,
rspfile=None,
rspfile_content=None,
deps=None,
):
self._line("rule %s" % name)
self.variable("command", command, indent=1)
if description:
self.variable("description", description, indent=1)
if depfile:
self.variable("depfile", depfile, indent=1)
if generator:
self.variable("generator", "1", indent=1)
if pool:
self.variable("pool", pool, indent=1)
if restat:
self.variable("restat", "1", indent=1)
if rspfile:
self.variable("rspfile", rspfile, indent=1)
if rspfile_content:
self.variable("rspfile_content", rspfile_content, indent=1)
if deps:
self.variable("deps", deps, indent=1)
def build(
self, outputs, rule, inputs=None, implicit=None, order_only=None, variables=None
):
outputs = self._as_list(outputs)
all_inputs = self._as_list(inputs)[:]
out_outputs = list(map(escape_path, outputs))
@ -69,15 +81,16 @@ class Writer(object):
if implicit:
implicit = map(escape_path, self._as_list(implicit))
all_inputs.append("|")
all_inputs.extend(implicit)
if order_only:
order_only = map(escape_path, self._as_list(order_only))
all_inputs.append("||")
all_inputs.extend(order_only)
self._line(
"build %s: %s" % (" ".join(out_outputs), " ".join([rule] + all_inputs))
)
if variables:
if isinstance(variables, dict):
@ -91,58 +104,59 @@ class Writer(object):
return outputs
def include(self, path):
self._line("include %s" % path)
def subninja(self, path):
self._line("subninja %s" % path)
def default(self, paths):
self._line("default %s" % " ".join(self._as_list(paths)))
def _count_dollars_before_index(self, s, i):
"""Returns the number of '$' characters right in front of s[i]."""
dollar_count = 0
dollar_index = i - 1
while dollar_index > 0 and s[dollar_index] == '$':
dollar_count += 1
dollar_index -= 1
return dollar_count
"""Returns the number of '$' characters right in front of s[i]."""
dollar_count = 0
dollar_index = i - 1
while dollar_index > 0 and s[dollar_index] == "$":
dollar_count += 1
dollar_index -= 1
return dollar_count
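# For example, in "a$ b" the space at index 2 is preceded by one "$"
# (an odd count), so _line treats it as escaped and will not wrap there:
# _count_dollars_before_index("a$ b", 2) -> 1, while
# _count_dollars_before_index("a b", 1) -> 0 marks a safe wrap point.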
def _line(self, text, indent=0):
"""Write 'text' word-wrapped at self.width characters."""
leading_space = " " * indent
while len(leading_space) + len(text) > self.width:
# The text is too wide; wrap if possible.
# Find the rightmost space that would obey our width constraint and
# that's not an escaped space.
available_space = self.width - len(leading_space) - len(" $")
space = available_space
while True:
space = text.rfind(" ", 0, space)
if space < 0 or self._count_dollars_before_index(text, space) % 2 == 0:
break
if space < 0:
# No such space; just use the first unescaped space we can find.
space = available_space - 1
while True:
space = text.find(" ", space + 1)
if (
space < 0
or self._count_dollars_before_index(text, space) % 2 == 0
):
break
if space < 0:
# Give up on breaking.
break
self.output.write(leading_space + text[0:space] + " $\n")
text = text[space + 1 :]
# Subsequent lines are continuations, so indent them.
leading_space = " " * (indent + 2)
self.output.write(leading_space + text + "\n")
def _as_list(self, input):
if input is None:
@ -155,6 +169,6 @@ class Writer(object):
def escape(string):
"""Escape a string such that it can be embedded into a Ninja file without
further interpretation."""
assert "\n" not in string, "Ninja syntax does not allow newlines"
# We only have one special metacharacter: '$'.
return string.replace("$", "$$")

View file

@ -7,44 +7,58 @@ structures or complex types except for dicts and lists. This is
because gyp copies so large structure that small copy overhead ends up
taking seconds in a project the size of Chromium."""
class Error(Exception):
pass
__all__ = ["Error", "deepcopy"]
def deepcopy(x):
"""Deep copy operation on gyp objects such as strings, ints, dicts
"""Deep copy operation on gyp objects such as strings, ints, dicts
and lists. More than twice as fast as copy.deepcopy but much less
generic."""
try:
return _deepcopy_dispatch[type(x)](x)
except KeyError:
raise Error(
"Unsupported type %s for deepcopy. Use copy.deepcopy "
"or expand simple_copy support." % type(x)
)
_deepcopy_dispatch = d = {}
def _deepcopy_atomic(x):
return x
try:
types = bool, float, int, str, type, type(None), long, unicode
except NameError: # Python 3
types = bool, float, int, str, type, type(None)
for x in types:
d[x] = _deepcopy_atomic
def _deepcopy_list(x):
return [deepcopy(a) for a in x]
d[list] = _deepcopy_list
def _deepcopy_dict(x):
y = {}
for key, value in x.items():
y[deepcopy(key)] = deepcopy(value)
return y
d[dict] = _deepcopy_dict
del d
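# A quick sanity sketch of the dispatch table above (illustrative only):
source = {"name": "base", "deps": [1, 2, {"nested": True}]}
copy_ = deepcopy(source)
assert copy_ == source and copy_["deps"] is not source["deps"]
try:
    deepcopy(set())  # sets are not registered in the dispatch table
except Error:
    pass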

View file

@ -24,312 +24,362 @@ PY3 = bytes != str
# A regex matching an argument corresponding to the output filename passed to
# link.exe.
_LINK_EXE_OUT_ARG = re.compile("/OUT:(?P<out>.+)$", re.IGNORECASE)
def main(args):
executor = WinTool()
exit_code = executor.Dispatch(args)
if exit_code is not None:
sys.exit(exit_code)
class WinTool(object):
"""This class performs all the Windows tooling steps. The methods can either
"""This class performs all the Windows tooling steps. The methods can either
be executed directly, or dispatched from an argument list."""
def _UseSeparateMspdbsrv(self, env, args):
"""Allows to use a unique instance of mspdbsrv.exe per linker instead of a
shared one."""
if len(args) < 1:
raise Exception("Not enough arguments")
if args[0] != "link.exe":
return
# Use the output filename passed to the linker to generate an endpoint name
# for mspdbsrv.exe.
endpoint_name = None
for arg in args:
m = _LINK_EXE_OUT_ARG.match(arg)
if m:
endpoint_name = re.sub(
r"\W+", "", "%s_%d" % (m.group("out"), os.getpid())
)
break
if endpoint_name is None:
return
# Adds the appropriate environment variable. This will be read by link.exe
# to know which instance of mspdbsrv.exe it should connect to (if it's
# not set then the default endpoint is used).
env["_MSPDBSRV_ENDPOINT_"] = endpoint_name
def Dispatch(self, args):
"""Dispatches a string command to a method."""
if len(args) < 1:
raise Exception("Not enough arguments")
method = "Exec%s" % self._CommandifyName(args[0])
return getattr(self, method)(*args[1:])
method = "Exec%s" % self._CommandifyName(args[0])
return getattr(self, method)(*args[1:])
def _CommandifyName(self, name_string):
"""Transforms a tool name like recursive-mirror to RecursiveMirror."""
return name_string.title().replace("-", "")
def _GetEnv(self, arch):
"""Gets the saved environment from a file for a given architecture."""
# The environment is saved as an "environment block" (see CreateProcess
# and msvs_emulation for details). We convert to a dict here.
# Drop last 2 NULs, one for list terminator, one for trailing vs. separator.
pairs = open(arch).read()[:-2].split("\0")
kvs = [item.split("=", 1) for item in pairs]
return dict(kvs)
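# Illustrative block (invented values): "PATH=C:\\bin\0TMP=C:\\tmp\0\0"
# -> [:-2] drops the two trailing NULs, split("\0") yields the
# "KEY=value" pairs, and the result is {"PATH": "C:\\bin", "TMP": "C:\\tmp"}.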
def ExecStamp(self, path):
"""Simple stamp command."""
open(path, "w").close()
def ExecRecursiveMirror(self, source, dest):
"""Emulation of rm -rf out && cp -af in out."""
if os.path.exists(dest):
if os.path.isdir(dest):
def _on_error(fn, path, excinfo):
# The operation failed, possibly because the file is set to
# read-only. If that's why, make it writable and try the op again.
if not os.access(path, os.W_OK):
os.chmod(path, stat.S_IWRITE)
fn(path)
shutil.rmtree(dest, onerror=_on_error)
else:
if not os.access(dest, os.W_OK):
# Attempt to make the file writable before deleting it.
os.chmod(dest, stat.S_IWRITE)
os.unlink(dest)
if os.path.isdir(source):
shutil.copytree(source, dest)
else:
shutil.copy2(source, dest)
def ExecLinkWrapper(self, arch, use_separate_mspdbsrv, *args):
"""Filter diagnostic output from link that looks like:
' Creating library ui.dll.lib and object ui.dll.exp'
This happens when there are exports from the dll or exe.
"""
env = self._GetEnv(arch)
if use_separate_mspdbsrv == "True":
self._UseSeparateMspdbsrv(env, args)
if sys.platform == "win32":
args = list(args) # *args is a tuple by default, which is read-only.
args[0] = args[0].replace("/", "\\")
# https://docs.python.org/2/library/subprocess.html:
# "On Unix with shell=True [...] if args is a sequence, the first item
# specifies the command string, and any additional items will be treated as
# additional arguments to the shell itself. That is to say, Popen does the
# equivalent of:
# Popen(['/bin/sh', '-c', args[0], args[1], ...])"
# For that reason, since going through the shell doesn't seem necessary on
# non-Windows don't do that there.
link = subprocess.Popen(
args,
shell=sys.platform == "win32",
env=env,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
)
out, _ = link.communicate()
if PY3:
out = out.decode("utf-8")
for line in out.splitlines():
if (
not line.startswith(" Creating library ")
and not line.startswith("Generating code")
and not line.startswith("Finished generating code")
):
print(line)
return link.returncode
def ExecLinkWithManifests(
self,
arch,
embed_manifest,
out,
ldcmd,
resname,
mt,
rc,
intermediate_manifest,
*manifests
):
"""A wrapper for handling creating a manifest resource and then executing
a link command."""
# The 'normal' way to do manifests is to have link generate a manifest
# based on gathering dependencies from the object files, then merge that
# manifest with other manifests supplied as sources, convert the merged
# manifest to a resource, and then *relink*, including the compiled
# version of the manifest resource. This breaks incremental linking, and
# is generally overly complicated. Instead, we merge all the manifests
# provided (along with one that includes what would normally be in the
# linker-generated one, see msvs_emulation.py), and include that into the
# first and only link. We still tell link to generate a manifest, but we
# only use that to assert that our simpler process did not miss anything.
variables = {
"python": sys.executable,
"arch": arch,
"out": out,
"ldcmd": ldcmd,
"resname": resname,
"mt": mt,
"rc": rc,
"intermediate_manifest": intermediate_manifest,
"manifests": " ".join(manifests),
}
add_to_ld = ""
if manifests:
subprocess.check_call(
"%(python)s gyp-win-tool manifest-wrapper %(arch)s %(mt)s -nologo "
"-manifest %(manifests)s -out:%(out)s.manifest" % variables
)
if embed_manifest == "True":
subprocess.check_call(
"%(python)s gyp-win-tool manifest-to-rc %(arch)s %(out)s.manifest"
" %(out)s.manifest.rc %(resname)s" % variables
)
subprocess.check_call(
"%(python)s gyp-win-tool rc-wrapper %(arch)s %(rc)s "
"%(out)s.manifest.rc" % variables
)
add_to_ld = " %(out)s.manifest.res" % variables
subprocess.check_call(ldcmd + add_to_ld)
# Run mt.exe on the theoretically complete manifest we generated, merging
# it with the one the linker generated to confirm that the linker
# generated one does not add anything. This is strictly unnecessary for
# correctness, it's only to verify that e.g. /MANIFESTDEPENDENCY was not
# used in a #pragma comment.
if manifests:
# Merge the intermediate one with ours to .assert.manifest, then check
# that .assert.manifest is identical to ours.
subprocess.check_call(
"%(python)s gyp-win-tool manifest-wrapper %(arch)s %(mt)s -nologo "
"-manifest %(out)s.manifest %(intermediate_manifest)s "
"-out:%(out)s.assert.manifest" % variables
)
assert_manifest = "%(out)s.assert.manifest" % variables
our_manifest = "%(out)s.manifest" % variables
# Load and normalize the manifests. mt.exe sometimes removes whitespace,
# and sometimes doesn't, unfortunately.
with open(our_manifest, "r") as our_f:
with open(assert_manifest, "r") as assert_f:
# Strip all whitespace before comparing; str.translate(None, ...) is
# Python 2 only, so use re.sub for Python 2/3 compatibility.
our_data = re.sub(r"\s", "", our_f.read())
assert_data = re.sub(r"\s", "", assert_f.read())
if our_data != assert_data:
os.unlink(out)
def dump(filename):
print(filename, file=sys.stderr)
print("-----", file=sys.stderr)
with open(filename, "r") as f:
print(f.read(), file=sys.stderr)
print("-----", file=sys.stderr)
dump(intermediate_manifest)
dump(our_manifest)
dump(assert_manifest)
sys.stderr.write(
'Linker generated manifest "%s" added to final manifest "%s" '
'(result in "%s"). '
"Were /MANIFEST switches used in #pragma statements? "
% (intermediate_manifest, our_manifest, assert_manifest)
)
return 1
def ExecManifestWrapper(self, arch, *args):
"""Run manifest tool with environment set. Strip out undesirable warning
(some XML blocks are recognized by the OS loader, but not the manifest
tool)."""
env = self._GetEnv(arch)
popen = subprocess.Popen(
args, shell=True, env=env, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
out, _ = popen.communicate()
if PY3:
out = out.decode("utf-8")
for line in out.splitlines():
if line and "manifest authoring warning 81010002" not in line:
print(line)
return popen.returncode
def ExecManifestToRc(self, arch, *args):
"""Creates a resource file pointing a SxS assembly manifest.
|args| is a tuple containing path to resource file, path to manifest file
and resource name which can be "1" (for executables) or "2" (for DLLs)."""
manifest_path, resource_path, resource_name = args
with open(resource_path, "w") as output:
output.write(
'#include <windows.h>\n%s RT_MANIFEST "%s"'
% (resource_name, os.path.abspath(manifest_path).replace("\\", "/"))
)
def ExecMidlWrapper(self, arch, outdir, tlb, h, dlldata, iid, proxy, idl, *flags):
"""Filter noisy filenames output from MIDL compile step that isn't
quietable via command line flags.
"""
args = (
["midl", "/nologo"]
+ list(flags)
+ [
"/out",
outdir,
"/tlb",
tlb,
"/h",
h,
"/dlldata",
dlldata,
"/iid",
iid,
"/proxy",
proxy,
idl,
]
)
env = self._GetEnv(arch)
popen = subprocess.Popen(
args, shell=True, env=env, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
out, _ = popen.communicate()
if PY3:
out = out.decode("utf-8")
# Filter junk out of stdout, and write filtered versions. Output we want
# to filter is pairs of lines that look like this:
# Processing C:\Program Files (x86)\Microsoft SDKs\...\include\objidl.idl
# objidl.idl
lines = out.splitlines()
prefixes = ("Processing ", "64 bit Processing ")
processing = set(os.path.basename(x) for x in lines if x.startswith(prefixes))
for line in lines:
if not line.startswith(prefixes) and line not in processing:
print(line)
return popen.returncode
def ExecAsmWrapper(self, arch, *args):
"""Filter logo banner from invocations of asm.exe."""
env = self._GetEnv(arch)
popen = subprocess.Popen(
args, shell=True, env=env, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
out, _ = popen.communicate()
if PY3:
out = out.decode("utf-8")
for line in out.splitlines():
if (
not line.startswith("Copyright (C) Microsoft Corporation")
and not line.startswith("Microsoft (R) Macro Assembler")
and not line.startswith(" Assembling: ")
and line
):
print(line)
return popen.returncode
def ExecRcWrapper(self, arch, *args):
"""Filter logo banner from invocations of rc.exe. Older versions of RC
don't support the /nologo flag."""
env = self._GetEnv(arch)
popen = subprocess.Popen(
args, shell=True, env=env, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
out, _ = popen.communicate()
if PY3:
out = out.decode("utf-8")
for line in out.splitlines():
if (
not line.startswith("Microsoft (R) Windows (R) Resource Compiler")
and not line.startswith("Copyright (C) Microsoft Corporation")
and line
):
print(line)
return popen.returncode
def ExecActionWrapper(self, arch, rspfile, *dir):
"""Runs an action command line from a response file using the environment
for |arch|. If |dir| is supplied, use that as the working directory."""
env = self._GetEnv(arch)
# TODO(scottmg): This is a temporary hack to get some specific variables
# through to actions that are set after gyp-time. http://crbug.com/333738.
for k, v in os.environ.items():
if k not in env:
env[k] = v
args = open(rspfile).read()
dir = dir[0] if dir else None
return subprocess.call(args, shell=True, env=env, cwd=dir)
def ExecClCompile(self, project_dir, selected_files):
"""Executed by msvs-ninja projects when the 'ClCompile' target is used to
build selected C/C++ files."""
project_dir = os.path.relpath(project_dir, BASE_DIR)
selected_files = selected_files.split(";")
ninja_targets = [
os.path.join(project_dir, filename) + "^^" for filename in selected_files
]
cmd = ["ninja.exe"]
cmd.extend(ninja_targets)
return subprocess.call(cmd, shell=True, cwd=BASE_DIR)
if __name__ == "__main__":
sys.exit(main(sys.argv[1:]))

File diff suppressed because it is too large

View file

@ -20,116 +20,122 @@ import xml.sax.saxutils
def _WriteWorkspace(main_gyp, sources_gyp, params):
""" Create a workspace to wrap main and sources gyp paths. """
(build_file_root, build_file_ext) = os.path.splitext(main_gyp)
workspace_path = build_file_root + '.xcworkspace'
options = params['options']
if options.generator_output:
workspace_path = os.path.join(options.generator_output, workspace_path)
try:
os.makedirs(workspace_path)
except OSError as e:
if e.errno != errno.EEXIST:
raise
output_string = '<?xml version="1.0" encoding="UTF-8"?>\n' + \
'<Workspace version = "1.0">\n'
for gyp_name in [main_gyp, sources_gyp]:
name = os.path.splitext(os.path.basename(gyp_name))[0] + '.xcodeproj'
name = xml.sax.saxutils.quoteattr("group:" + name)
output_string += ' <FileRef location = %s></FileRef>\n' % name
output_string += '</Workspace>\n'
""" Create a workspace to wrap main and sources gyp paths. """
(build_file_root, build_file_ext) = os.path.splitext(main_gyp)
workspace_path = build_file_root + ".xcworkspace"
options = params["options"]
if options.generator_output:
workspace_path = os.path.join(options.generator_output, workspace_path)
try:
os.makedirs(workspace_path)
except OSError as e:
if e.errno != errno.EEXIST:
raise
output_string = (
'<?xml version="1.0" encoding="UTF-8"?>\n' + '<Workspace version = "1.0">\n'
)
for gyp_name in [main_gyp, sources_gyp]:
name = os.path.splitext(os.path.basename(gyp_name))[0] + ".xcodeproj"
name = xml.sax.saxutils.quoteattr("group:" + name)
output_string += " <FileRef location = %s></FileRef>\n" % name
output_string += "</Workspace>\n"
workspace_file = os.path.join(workspace_path, "contents.xcworkspacedata")
workspace_file = os.path.join(workspace_path, "contents.xcworkspacedata")
try:
with open(workspace_file, "r") as input_file:
input_string = input_file.read()
if input_string == output_string:
return
except IOError:
# Ignore errors if the file doesn't exist.
pass
with open(workspace_file, "w") as output_file:
output_file.write(output_string)
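# For a made-up main_gyp of "all.gyp" plus the sources gyp created below,
# the workspace data written above looks roughly like:
#
#     <?xml version="1.0" encoding="UTF-8"?>
#     <Workspace version = "1.0">
#       <FileRef location = "group:all.xcodeproj"></FileRef>
#       <FileRef location = "group:sources_for_indexing.xcodeproj"></FileRef>
#     </Workspace>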
def _TargetFromSpec(old_spec, params):
""" Create fake target for xcode-ninja wrapper. """
# Determine ninja top level build dir (e.g. /path/to/out).
ninja_toplevel = None
jobs = 0
if params:
options = params['options']
ninja_toplevel = \
os.path.join(options.toplevel_dir,
gyp.generator.ninja.ComputeOutputDir(params))
jobs = params.get('generator_flags', {}).get('xcode_ninja_jobs', 0)
""" Create fake target for xcode-ninja wrapper. """
# Determine ninja top level build dir (e.g. /path/to/out).
ninja_toplevel = None
jobs = 0
if params:
options = params["options"]
ninja_toplevel = os.path.join(
options.toplevel_dir, gyp.generator.ninja.ComputeOutputDir(params)
)
jobs = params.get("generator_flags", {}).get("xcode_ninja_jobs", 0)
target_name = old_spec.get("target_name")
product_name = old_spec.get("product_name", target_name)
product_extension = old_spec.get("product_extension")
ninja_target = {}
ninja_target["target_name"] = target_name
ninja_target["product_name"] = product_name
if product_extension:
ninja_target["product_extension"] = product_extension
ninja_target["toolset"] = old_spec.get("toolset")
ninja_target["default_configuration"] = old_spec.get("default_configuration")
ninja_target["configurations"] = {}
# Tell Xcode to look in |ninja_toplevel| for build products.
new_xcode_settings = {}
if ninja_toplevel:
new_xcode_settings["CONFIGURATION_BUILD_DIR"] = (
"%s/$(CONFIGURATION)$(EFFECTIVE_PLATFORM_NAME)" % ninja_toplevel
)
if "configurations" in old_spec:
for config in old_spec["configurations"]:
old_xcode_settings = old_spec["configurations"][config].get(
"xcode_settings", {}
)
if "IPHONEOS_DEPLOYMENT_TARGET" in old_xcode_settings:
new_xcode_settings["CODE_SIGNING_REQUIRED"] = "NO"
new_xcode_settings["IPHONEOS_DEPLOYMENT_TARGET"] = old_xcode_settings[
"IPHONEOS_DEPLOYMENT_TARGET"
]
for key in ["BUNDLE_LOADER", "TEST_HOST"]:
if key in old_xcode_settings:
new_xcode_settings[key] = old_xcode_settings[key]
ninja_target["configurations"][config] = {}
ninja_target["configurations"][config][
"xcode_settings"
] = new_xcode_settings
ninja_target["mac_bundle"] = old_spec.get("mac_bundle", 0)
ninja_target["mac_xctest_bundle"] = old_spec.get("mac_xctest_bundle", 0)
ninja_target["ios_app_extension"] = old_spec.get("ios_app_extension", 0)
ninja_target["ios_watchkit_extension"] = old_spec.get("ios_watchkit_extension", 0)
ninja_target["ios_watchkit_app"] = old_spec.get("ios_watchkit_app", 0)
ninja_target["type"] = old_spec["type"]
if ninja_toplevel:
ninja_target["actions"] = [
{
"action_name": "Compile and copy %s via ninja" % target_name,
"inputs": [],
"outputs": [],
"action": [
"env",
"PATH=%s" % os.environ["PATH"],
"ninja",
"-C",
new_xcode_settings["CONFIGURATION_BUILD_DIR"],
target_name,
],
"message": "Compile and copy %s via ninja" % target_name,
},
]
if jobs > 0:
ninja_target["actions"][0]["action"].extend(("-j", jobs))
return ninja_target
def IsValidTargetForWrapper(target_extras, executable_target_pattern, spec):
"""Limit targets for Xcode wrapper.
"""Limit targets for Xcode wrapper.
Xcode sometimes performs poorly with too many targets, so only include
proper executable targets, with filters to customize.
@ -138,25 +144,27 @@ def IsValidTargetForWrapper(target_extras, executable_target_pattern, spec):
executable_target_pattern: Regular expression limiting executable targets.
spec: Specifications for target.
"""
target_name = spec.get("target_name")
# Always include targets matching target_extras.
if target_extras is not None and re.search(target_extras, target_name):
return True
# Otherwise just show executable targets and xc_tests.
if int(spec.get("mac_xctest_bundle", 0)) != 0 or (
spec.get("type", "") == "executable"
and spec.get("product_extension", "") != "bundle"
):
# If there is a filter and the target does not match, exclude the target.
if executable_target_pattern is not None:
if not re.search(executable_target_pattern, target_name):
return False
return True
return False
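# Illustrative calls with fabricated specs: with no filters an executable
# passes while a static library is rejected:
assert IsValidTargetForWrapper(None, None, {"target_name": "app", "type": "executable"})
assert not IsValidTargetForWrapper(None, None, {"target_name": "base", "type": "static_library"})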
def CreateWrapper(target_list, target_dicts, data, params):
"""Initialize targets for the ninja wrapper.
"""Initialize targets for the ninja wrapper.
This sets up the necessary variables in the targets to generate Xcode projects
that use ninja as an external builder.
@ -166,124 +174,129 @@ def CreateWrapper(target_list, target_dicts, data, params):
data: Dict of flattened build files keyed on gyp path.
params: Dict of global options for gyp.
"""
orig_gyp = params["build_files"][0]
for gyp_name, gyp_dict in data.items():
if gyp_name == orig_gyp:
depth = gyp_dict["_DEPTH"]
# Check for custom main gyp name, otherwise use the default CHROMIUM_GYP_FILE
# and prepend .ninja before the .gyp extension.
generator_flags = params.get("generator_flags", {})
main_gyp = generator_flags.get("xcode_ninja_main_gyp", None)
if main_gyp is None:
(build_file_root, build_file_ext) = os.path.splitext(orig_gyp)
main_gyp = build_file_root + ".ninja" + build_file_ext
# Create new |target_list|, |target_dicts| and |data| data structures.
new_target_list = []
new_target_dicts = {}
new_data = {}
# Set base keys needed for |data|.
new_data[main_gyp] = {}
new_data[main_gyp]["included_files"] = []
new_data[main_gyp]["targets"] = []
new_data[main_gyp]["xcode_settings"] = data[orig_gyp].get("xcode_settings", {})
# Normally the xcode-ninja generator includes only valid executable targets.
# If |xcode_ninja_executable_target_pattern| is set, that list is reduced to
# executable targets that match the pattern. (Default all)
executable_target_pattern = generator_flags.get(
"xcode_ninja_executable_target_pattern", None
)
# For including other non-executable targets, add the matching target name
# to the |xcode_ninja_target_pattern| regular expression. (Default none)
target_extras = generator_flags.get("xcode_ninja_target_pattern", None)
for old_qualified_target in target_list:
spec = target_dicts[old_qualified_target]
if IsValidTargetForWrapper(target_extras, executable_target_pattern, spec):
# Add to new_target_list.
target_name = spec.get("target_name")
new_target_name = "%s:%s#target" % (main_gyp, target_name)
new_target_list.append(new_target_name)
# Add to new_target_dicts.
new_target_dicts[new_target_name] = _TargetFromSpec(spec, params)
# Add to new_data.
for old_target in data[old_qualified_target.split(":")[0]]["targets"]:
if old_target["target_name"] == target_name:
new_data_target = {}
new_data_target["target_name"] = old_target["target_name"]
new_data_target["toolset"] = old_target["toolset"]
new_data[main_gyp]["targets"].append(new_data_target)
# Create sources target.
sources_target_name = 'sources_for_indexing'
sources_target = _TargetFromSpec(
{ 'target_name' : sources_target_name,
'toolset': 'target',
'default_configuration': 'Default',
'mac_bundle': '0',
'type': 'executable'
}, None)
# Create sources target.
sources_target_name = "sources_for_indexing"
sources_target = _TargetFromSpec(
{
"target_name": sources_target_name,
"toolset": "target",
"default_configuration": "Default",
"mac_bundle": "0",
"type": "executable",
},
None,
)
# Tell Xcode to look everywhere for headers.
sources_target['configurations'] = {'Default': { 'include_dirs': [ depth ] } }
# Tell Xcode to look everywhere for headers.
sources_target["configurations"] = {"Default": {"include_dirs": [depth]}}
# Put excluded files into the sources target so they can be opened in Xcode.
skip_excluded_files = \
not generator_flags.get('xcode_ninja_list_excluded_files', True)
# Put excluded files into the sources target so they can be opened in Xcode.
skip_excluded_files = not generator_flags.get(
"xcode_ninja_list_excluded_files", True
)
sources = []
for target, target_dict in target_dicts.items():
base = os.path.dirname(target)
files = target_dict.get('sources', []) + \
target_dict.get('mac_bundle_resources', [])
sources = []
for target, target_dict in target_dicts.items():
base = os.path.dirname(target)
files = target_dict.get("sources", []) + target_dict.get(
"mac_bundle_resources", []
)
if not skip_excluded_files:
files.extend(target_dict.get('sources_excluded', []) +
target_dict.get('mac_bundle_resources_excluded', []))
if not skip_excluded_files:
files.extend(
target_dict.get("sources_excluded", [])
+ target_dict.get("mac_bundle_resources_excluded", [])
)
for action in target_dict.get('actions', []):
files.extend(action.get('inputs', []))
for action in target_dict.get("actions", []):
files.extend(action.get("inputs", []))
if not skip_excluded_files:
files.extend(action.get('inputs_excluded', []))
if not skip_excluded_files:
files.extend(action.get("inputs_excluded", []))
# Remove files starting with $. These are mostly intermediate files for the
# build system.
files = [ file for file in files if not file.startswith('$')]
# Remove files starting with $. These are mostly intermediate files for the
# build system.
files = [file for file in files if not file.startswith("$")]
# Make sources relative to root build file.
relative_path = os.path.dirname(main_gyp)
sources += [ os.path.relpath(os.path.join(base, file), relative_path)
for file in files ]
# Make sources relative to root build file.
relative_path = os.path.dirname(main_gyp)
sources += [
os.path.relpath(os.path.join(base, file), relative_path) for file in files
]
sources_target['sources'] = sorted(set(sources))
sources_target["sources"] = sorted(set(sources))
# Put sources_to_index in its own gyp.
sources_gyp = \
os.path.join(os.path.dirname(main_gyp), sources_target_name + ".gyp")
fully_qualified_target_name = \
'%s:%s#target' % (sources_gyp, sources_target_name)
# Put sources_to_index in its own gyp.
sources_gyp = os.path.join(os.path.dirname(main_gyp), sources_target_name + ".gyp")
fully_qualified_target_name = "%s:%s#target" % (sources_gyp, sources_target_name)
# Add to new_target_list, new_target_dicts and new_data.
new_target_list.append(fully_qualified_target_name)
new_target_dicts[fully_qualified_target_name] = sources_target
new_data_target = {}
new_data_target['target_name'] = sources_target['target_name']
new_data_target['_DEPTH'] = depth
new_data_target['toolset'] = "target"
new_data[sources_gyp] = {}
new_data[sources_gyp]['targets'] = []
new_data[sources_gyp]['included_files'] = []
new_data[sources_gyp]['xcode_settings'] = \
data[orig_gyp].get('xcode_settings', {})
new_data[sources_gyp]['targets'].append(new_data_target)
# Add to new_target_list, new_target_dicts and new_data.
new_target_list.append(fully_qualified_target_name)
new_target_dicts[fully_qualified_target_name] = sources_target
new_data_target = {}
new_data_target["target_name"] = sources_target["target_name"]
new_data_target["_DEPTH"] = depth
new_data_target["toolset"] = "target"
new_data[sources_gyp] = {}
new_data[sources_gyp]["targets"] = []
new_data[sources_gyp]["included_files"] = []
new_data[sources_gyp]["xcode_settings"] = data[orig_gyp].get("xcode_settings", {})
new_data[sources_gyp]["targets"].append(new_data_target)
# Write workspace to file.
_WriteWorkspace(main_gyp, sources_gyp, params)
return (new_target_list, new_target_dicts, new_data)
# Write workspace to file.
_WriteWorkspace(main_gyp, sources_gyp, params)
return (new_target_list, new_target_dicts, new_data)

File diff suppressed because it is too large.

View file

@ -14,56 +14,52 @@ import xml.dom.minidom
def _Replacement_write_data(writer, data, is_attrib=False):
"""Writes datachars to writer."""
data = data.replace("&", "&amp;").replace("<", "&lt;")
data = data.replace("\"", "&quot;").replace(">", "&gt;")
if is_attrib:
data = data.replace(
"\r", "&#xD;").replace(
"\n", "&#xA;").replace(
"\t", "&#x9;")
writer.write(data)
"""Writes datachars to writer."""
data = data.replace("&", "&amp;").replace("<", "&lt;")
data = data.replace('"', "&quot;").replace(">", "&gt;")
if is_attrib:
data = data.replace("\r", "&#xD;").replace("\n", "&#xA;").replace("\t", "&#x9;")
writer.write(data)
def _Replacement_writexml(self, writer, indent="", addindent="", newl=""):
# indent = current indentation
# addindent = indentation to add to higher levels
# newl = newline string
writer.write(indent+"<" + self.tagName)
# indent = current indentation
# addindent = indentation to add to higher levels
# newl = newline string
writer.write(indent + "<" + self.tagName)
attrs = self._get_attributes()
a_names = attrs.keys()
a_names.sort()
attrs = self._get_attributes()
a_names = sorted(attrs.keys())
for a_name in a_names:
writer.write(" %s=\"" % a_name)
_Replacement_write_data(writer, attrs[a_name].value, is_attrib=True)
writer.write("\"")
if self.childNodes:
writer.write(">%s" % newl)
for node in self.childNodes:
node.writexml(writer, indent + addindent, addindent, newl)
writer.write("%s</%s>%s" % (indent, self.tagName, newl))
else:
writer.write("/>%s" % newl)
for a_name in a_names:
writer.write(' %s="' % a_name)
_Replacement_write_data(writer, attrs[a_name].value, is_attrib=True)
writer.write('"')
if self.childNodes:
writer.write(">%s" % newl)
for node in self.childNodes:
node.writexml(writer, indent + addindent, addindent, newl)
writer.write("%s</%s>%s" % (indent, self.tagName, newl))
else:
writer.write("/>%s" % newl)
class XmlFix(object):
"""Object to manage temporary patching of xml.dom.minidom."""
"""Object to manage temporary patching of xml.dom.minidom."""
def __init__(self):
# Preserve current xml.dom.minidom functions.
self.write_data = xml.dom.minidom._write_data
self.writexml = xml.dom.minidom.Element.writexml
# Inject replacement versions of a function and a method.
xml.dom.minidom._write_data = _Replacement_write_data
xml.dom.minidom.Element.writexml = _Replacement_writexml
def __init__(self):
# Preserve current xml.dom.minidom functions.
self.write_data = xml.dom.minidom._write_data
self.writexml = xml.dom.minidom.Element.writexml
# Inject replacement versions of a function and a method.
xml.dom.minidom._write_data = _Replacement_write_data
xml.dom.minidom.Element.writexml = _Replacement_writexml
def Cleanup(self):
if self.write_data:
xml.dom.minidom._write_data = self.write_data
xml.dom.minidom.Element.writexml = self.writexml
self.write_data = None
def Cleanup(self):
if self.write_data:
xml.dom.minidom._write_data = self.write_data
xml.dom.minidom.Element.writexml = self.writexml
self.write_data = None
def __del__(self):
self.Cleanup()
def __del__(self):
self.Cleanup()
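A minimal usage sketch for the XmlFix patch manager defined above, assuming the class is in scope; the document string is illustrative:

import sys
import xml.dom.minidom

fix = XmlFix()  # swaps in the replacement minidom writers (assumed from above)
try:
    doc = xml.dom.minidom.parseString('<a b="1&#10;2"/>')
    doc.documentElement.writexml(sys.stdout, newl="\n")  # newline escaped as &#xA;
finally:
    fix.Cleanup()  # restores the original minidom functions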

2
gyp/requirements_dev.txt Normal file
View file

@ -0,0 +1,2 @@
flake8
pytest

View file

@ -1,81 +0,0 @@
#!/usr/bin/python
# Copyright (c) 2009 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import os.path
import shutil
import sys
gyps = [
'app/app.gyp',
'base/base.gyp',
'build/temp_gyp/googleurl.gyp',
'build/all.gyp',
'build/common.gypi',
'build/external_code.gypi',
'chrome/test/security_tests/security_tests.gyp',
'chrome/third_party/hunspell/hunspell.gyp',
'chrome/chrome.gyp',
'media/media.gyp',
'net/net.gyp',
'printing/printing.gyp',
'sdch/sdch.gyp',
'skia/skia.gyp',
'testing/gmock.gyp',
'testing/gtest.gyp',
'third_party/bzip2/bzip2.gyp',
'third_party/icu38/icu38.gyp',
'third_party/libevent/libevent.gyp',
'third_party/libjpeg/libjpeg.gyp',
'third_party/libpng/libpng.gyp',
'third_party/libxml/libxml.gyp',
'third_party/libxslt/libxslt.gyp',
'third_party/lzma_sdk/lzma_sdk.gyp',
'third_party/modp_b64/modp_b64.gyp',
'third_party/npapi/npapi.gyp',
'third_party/sqlite/sqlite.gyp',
'third_party/zlib/zlib.gyp',
'v8/tools/gyp/v8.gyp',
'webkit/activex_shim/activex_shim.gyp',
'webkit/activex_shim_dll/activex_shim_dll.gyp',
'webkit/build/action_csspropertynames.py',
'webkit/build/action_cssvaluekeywords.py',
'webkit/build/action_jsconfig.py',
'webkit/build/action_makenames.py',
'webkit/build/action_maketokenizer.py',
'webkit/build/action_useragentstylesheets.py',
'webkit/build/rule_binding.py',
'webkit/build/rule_bison.py',
'webkit/build/rule_gperf.py',
'webkit/tools/test_shell/test_shell.gyp',
'webkit/webkit.gyp',
]
def Main(argv):
if len(argv) != 3 or argv[1] not in ['push', 'pull']:
print 'Usage: %s push/pull PATH_TO_CHROME' % argv[0]
return 1
path_to_chrome = argv[2]
for g in gyps:
chrome_file = os.path.join(path_to_chrome, g)
local_file = os.path.join(os.path.dirname(argv[0]), os.path.split(g)[1])
if argv[1] == 'push':
print 'Copying %s to %s' % (local_file, chrome_file)
shutil.copyfile(local_file, chrome_file)
elif argv[1] == 'pull':
print 'Copying %s to %s' % (chrome_file, local_file)
shutil.copyfile(chrome_file, local_file)
else:
assert False
return 0
if __name__ == '__main__':
sys.exit(Main(sys.argv))

View file

@ -1,5 +0,0 @@
@rem Copyright (c) 2009 Google Inc. All rights reserved.
@rem Use of this source code is governed by a BSD-style license that can be
@rem found in the LICENSE file.
@python %~dp0/samples %*

View file

@ -4,16 +4,41 @@
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from os import path
from setuptools import setup
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, "README.md")) as in_file:
long_description = in_file.read()
setup(
name='gyp',
version='0.1',
description='Generate Your Projects',
author='Chromium Authors',
author_email='chromium-dev@googlegroups.com',
url='http://code.google.com/p/gyp',
package_dir = {'': 'pylib'},
packages=['gyp', 'gyp.generator'],
entry_points = {'console_scripts': ['gyp=gyp:script_main'] }
name="gyp-next",
version="0.2.0",
description="A fork of the GYP build system for use in the Node.js projects",
long_description=long_description,
long_description_content_type="text/markdown",
author="Node.js contributors",
author_email="ryzokuken@disroot.org",
url="https://github.com/nodejs/gyp-next",
package_dir={"": "pylib"},
packages=["gyp", "gyp.generator"],
entry_points={"console_scripts": ["gyp=gyp:script_main"]},
python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*",
classifiers=[
'Development Status :: 3 - Alpha',
'Environment :: Console',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Natural Language :: English',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
],
)
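As a hedged sketch, the console_scripts entry point declared above resolves at install time to a wrapper roughly equivalent to the following (the exact script setuptools generates differs):

import sys

# "gyp=gyp:script_main" means: import module gyp, call attribute script_main.
from gyp import script_main

if __name__ == "__main__":
    sys.exit(script_main())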

269
gyp/test_gyp.py Executable file
View file

@ -0,0 +1,269 @@
#!/usr/bin/env python
# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""gyptest.py -- test runner for GYP tests."""
from __future__ import print_function
import argparse
import os
import platform
import subprocess
import sys
import time
def is_test_name(f):
return f.startswith("gyptest") and f.endswith(".py")
def find_all_gyptest_files(directory):
result = []
for root, dirs, files in os.walk(directory):
result.extend([os.path.join(root, f) for f in files if is_test_name(f)])
result.sort()
return result
def main(argv=None):
if argv is None:
argv = sys.argv
parser = argparse.ArgumentParser()
parser.add_argument("-a", "--all", action="store_true", help="run all tests")
parser.add_argument("-C", "--chdir", action="store", help="change to directory")
parser.add_argument(
"-f",
"--format",
action="store",
default="",
help="run tests with the specified formats",
)
parser.add_argument(
"-G",
"--gyp_option",
action="append",
default=[],
help="Add -G options to the gyp command line",
)
parser.add_argument(
"-l", "--list", action="store_true", help="list available tests and exit"
)
parser.add_argument(
"-n",
"--no-exec",
action="store_true",
help="no execute, just print the command line",
)
parser.add_argument(
"--path", action="append", default=[], help="additional $PATH directory"
)
parser.add_argument(
"-q",
"--quiet",
action="store_true",
help="quiet, don't print anything unless there are failures",
)
parser.add_argument(
"-v",
"--verbose",
action="store_true",
help="print configuration info and test results.",
)
parser.add_argument("tests", nargs="*")
args = parser.parse_args(argv[1:])
if args.chdir:
os.chdir(args.chdir)
if args.path:
extra_path = [os.path.abspath(p) for p in args.path]
extra_path = os.pathsep.join(extra_path)
os.environ["PATH"] = extra_path + os.pathsep + os.environ["PATH"]
if not args.tests:
if not args.all:
sys.stderr.write("Specify -a to get all tests.\n")
return 1
args.tests = ["test"]
tests = []
for arg in args.tests:
if os.path.isdir(arg):
tests.extend(find_all_gyptest_files(os.path.normpath(arg)))
else:
if not is_test_name(os.path.basename(arg)):
print(arg, "is not a valid gyp test name.", file=sys.stderr)
sys.exit(1)
tests.append(arg)
if args.list:
for test in tests:
print(test)
sys.exit(0)
os.environ["PYTHONPATH"] = os.path.abspath("test/lib")
if args.verbose:
print_configuration_info()
if args.gyp_option and not args.quiet:
print("Extra Gyp options: %s\n" % args.gyp_option)
if args.format:
format_list = args.format.split(",")
else:
format_list = {
"aix5": ["make"],
"freebsd7": ["make"],
"freebsd8": ["make"],
"openbsd5": ["make"],
"cygwin": ["msvs"],
"win32": ["msvs", "ninja"],
"linux": ["make", "ninja"],
"linux2": ["make", "ninja"],
"linux3": ["make", "ninja"],
# TODO: Re-enable xcode-ninja.
# https://bugs.chromium.org/p/gyp/issues/detail?id=530
# 'darwin': ['make', 'ninja', 'xcode', 'xcode-ninja'],
"darwin": ["make", "ninja", "xcode"],
}[sys.platform]
gyp_options = []
for option in args.gyp_option:
gyp_options += ["-G", option]
runner = Runner(format_list, tests, gyp_options, args.verbose)
runner.run()
if not args.quiet:
runner.print_results()
if runner.failures:
return 1
else:
return 0
def print_configuration_info():
print("Test configuration:")
if sys.platform == "darwin":
sys.path.append(os.path.abspath("test/lib"))
import TestMac
print(" Mac %s %s" % (platform.mac_ver()[0], platform.mac_ver()[2]))
print(" Xcode %s" % TestMac.Xcode.Version())
elif sys.platform == "win32":
sys.path.append(os.path.abspath("pylib"))
import gyp.MSVSVersion
print(" Win %s %s\n" % platform.win32_ver()[0:2])
print(" MSVS %s" % gyp.MSVSVersion.SelectVisualStudioVersion().Description())
elif sys.platform in ("linux", "linux2"):
print(" Linux %s" % " ".join(platform.linux_distribution()))
print(" Python %s" % platform.python_version())
print(" PYTHONPATH=%s" % os.environ["PYTHONPATH"])
print()
class Runner(object):
def __init__(self, formats, tests, gyp_options, verbose):
self.formats = formats
self.tests = tests
self.verbose = verbose
self.gyp_options = gyp_options
self.failures = []
self.num_tests = len(formats) * len(tests)
num_digits = len(str(self.num_tests))
self.fmt_str = "[%%%dd/%%%dd] (%%s) %%s" % (num_digits, num_digits)
self.isatty = sys.stdout.isatty() and not self.verbose
self.env = os.environ.copy()
self.hpos = 0
def run(self):
run_start = time.time()
i = 1
for fmt in self.formats:
for test in self.tests:
self.run_test(test, fmt, i)
i += 1
if self.isatty:
self.erase_current_line()
self.took = time.time() - run_start
def run_test(self, test, fmt, i):
if self.isatty:
self.erase_current_line()
msg = self.fmt_str % (i, self.num_tests, fmt, test)
self.print_(msg)
start = time.time()
cmd = [sys.executable, test] + self.gyp_options
self.env["TESTGYP_FORMAT"] = fmt
proc = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, env=self.env
)
proc.wait()
took = time.time() - start
stdout = proc.stdout.read().decode("utf8")
if proc.returncode == 2:
res = "skipped"
elif proc.returncode:
res = "failed"
self.failures.append("(%s) %s" % (test, fmt))
else:
res = "passed"
res_msg = " %s %.3fs" % (res, took)
self.print_(res_msg)
if (
stdout
and not stdout.endswith("PASSED\n")
and not (stdout.endswith("NO RESULT\n"))
):
print()
for l in stdout.splitlines():
print(" %s" % l)
elif not self.isatty:
print()
def print_(self, msg):
print(msg, end="")
index = msg.rfind("\n")
if index == -1:
self.hpos += len(msg)
else:
self.hpos = len(msg) - index
sys.stdout.flush()
def erase_current_line(self):
print("\b" * self.hpos + " " * self.hpos + "\b" * self.hpos, end="")
sys.stdout.flush()
self.hpos = 0
def print_results(self):
num_failures = len(self.failures)
if num_failures:
print()
if num_failures == 1:
print("Failed the following test:")
else:
print("Failed the following %d tests:" % num_failures)
print("\t" + "\n\t".join(sorted(self.failures)))
print()
print(
"Ran %d tests in %.3fs, %d failed."
% (self.num_tests, self.took, num_failures)
)
print()
if __name__ == "__main__":
sys.exit(main())
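A quick illustration of the discovery helpers above, assuming this module's definitions are in scope; the file names are hypothetical:

# is_test_name() accepts only files named gyptest*.py.
assert is_test_name("gyptest-all.py")
assert not is_test_name("helper.py")
# find_all_gyptest_files("test") then walks the directory recursively and
# returns the sorted list of matching paths.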

View file

@ -16,87 +16,88 @@ import sys
def ParseTarget(target):
target, _, suffix = target.partition('#')
filename, _, target = target.partition(':')
return filename, target, suffix
target, _, suffix = target.partition("#")
filename, _, target = target.partition(":")
return filename, target, suffix
def LoadEdges(filename, targets):
"""Load the edges map from the dump file, and filter it to only
"""Load the edges map from the dump file, and filter it to only
show targets in |targets| and their dependents."""
file = open('dump.json')
edges = json.load(file)
file.close()
file = open("dump.json")
edges = json.load(file)
file.close()
# Copy out only the edges we're interested in from the full edge list.
target_edges = {}
to_visit = targets[:]
while to_visit:
src = to_visit.pop()
if src in target_edges:
continue
target_edges[src] = edges[src]
to_visit.extend(edges[src])
# Copy out only the edges we're interested in from the full edge list.
target_edges = {}
to_visit = targets[:]
while to_visit:
src = to_visit.pop()
if src in target_edges:
continue
target_edges[src] = edges[src]
to_visit.extend(edges[src])
return target_edges
return target_edges
def WriteGraph(edges):
"""Print a graphviz graph to stdout.
"""Print a graphviz graph to stdout.
|edges| is a map of target to a list of other targets it depends on."""
# Bucket targets by file.
files = collections.defaultdict(list)
for src, dst in edges.items():
build_file, target_name, toolset = ParseTarget(src)
files[build_file].append(src)
# Bucket targets by file.
files = collections.defaultdict(list)
for src, dst in edges.items():
build_file, target_name, toolset = ParseTarget(src)
files[build_file].append(src)
print('digraph D {')
print(' fontsize=8') # Used by subgraphs.
print(' node [fontsize=8]')
print("digraph D {")
print(" fontsize=8") # Used by subgraphs.
print(" node [fontsize=8]")
# Output nodes by file. We must first write out each node within
# its file grouping before writing out any edges that may refer
# to those nodes.
for filename, targets in files.items():
if len(targets) == 1:
# If there's only one node for this file, simplify
# the display by making it a box without an internal node.
target = targets[0]
build_file, target_name, toolset = ParseTarget(target)
print(' "%s" [shape=box, label="%s\\n%s"]' % (target, filename,
target_name))
else:
# Group multiple nodes together in a subgraph.
print(' subgraph "cluster_%s" {' % filename)
print(' label = "%s"' % filename)
for target in targets:
build_file, target_name, toolset = ParseTarget(target)
print(' "%s" [label="%s"]' % (target, target_name))
print(' }')
# Output nodes by file. We must first write out each node within
# its file grouping before writing out any edges that may refer
# to those nodes.
for filename, targets in files.items():
if len(targets) == 1:
# If there's only one node for this file, simplify
# the display by making it a box without an internal node.
target = targets[0]
build_file, target_name, toolset = ParseTarget(target)
print(
' "%s" [shape=box, label="%s\\n%s"]' % (target, filename, target_name)
)
else:
# Group multiple nodes together in a subgraph.
print(' subgraph "cluster_%s" {' % filename)
print(' label = "%s"' % filename)
for target in targets:
build_file, target_name, toolset = ParseTarget(target)
print(' "%s" [label="%s"]' % (target, target_name))
print(" }")
# Now that we've placed all the nodes within subgraphs, output all
# the edges between nodes.
for src, dsts in edges.items():
for dst in dsts:
print(' "%s" -> "%s"' % (src, dst))
# Now that we've placed all the nodes within subgraphs, output all
# the edges between nodes.
for src, dsts in edges.items():
for dst in dsts:
print(' "%s" -> "%s"' % (src, dst))
print('}')
print("}")
def main():
if len(sys.argv) < 2:
print(__doc__, file=sys.stderr)
print(file=sys.stderr)
print('usage: %s target1 target2...' % (sys.argv[0]), file=sys.stderr)
return 1
if len(sys.argv) < 2:
print(__doc__, file=sys.stderr)
print(file=sys.stderr)
print("usage: %s target1 target2..." % (sys.argv[0]), file=sys.stderr)
return 1
edges = LoadEdges('dump.json', sys.argv[1:])
edges = LoadEdges("dump.json", sys.argv[1:])
WriteGraph(edges)
return 0
WriteGraph(edges)
return 0
if __name__ == '__main__':
sys.exit(main())
if __name__ == "__main__":
sys.exit(main())
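For context, a minimal sketch of the dump.json edge map that LoadEdges() reads above; the target names are hypothetical (real files are written by gyp's dump_dependency_json generator):

import json

edges = {
    "a.gyp:app#target": ["b.gyp:lib#target"],  # app depends on lib
    "b.gyp:lib#target": [],
}
with open("dump.json", "w") as f:
    json.dump(edges, f)
# LoadEdges("dump.json", ["a.gyp:app#target"]) keeps both entries, since it
# transitively follows dependencies of the requested targets.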

View file

@ -13,7 +13,7 @@ import re
# Regex to remove comments when we're counting braces.
COMMENT_RE = re.compile(r'\s*#.*')
COMMENT_RE = re.compile(r"\s*#.*")
# Regex to remove quoted strings when we're counting braces.
# It takes into account quoted quotes, and makes sure that the quotes match.
@ -24,45 +24,47 @@ QUOTE_RE = re.compile(QUOTE_RE_STR)
def comment_replace(matchobj):
return matchobj.group(1) + matchobj.group(2) + '#' * len(matchobj.group(3))
return matchobj.group(1) + matchobj.group(2) + "#" * len(matchobj.group(3))
def mask_comments(input):
"""Mask the quoted strings so we skip braces inside quoted strings."""
search_re = re.compile(r'(.*?)(#)(.*)')
return [search_re.sub(comment_replace, line) for line in input]
"""Mask the quoted strings so we skip braces inside quoted strings."""
search_re = re.compile(r"(.*?)(#)(.*)")
return [search_re.sub(comment_replace, line) for line in input]
def quote_replace(matchobj):
return "%s%s%s%s" % (matchobj.group(1),
matchobj.group(2),
'x'*len(matchobj.group(3)),
matchobj.group(2))
return "%s%s%s%s" % (
matchobj.group(1),
matchobj.group(2),
"x" * len(matchobj.group(3)),
matchobj.group(2),
)
def mask_quotes(input):
"""Mask the quoted strings so we skip braces inside quoted strings."""
search_re = re.compile(r'(.*?)' + QUOTE_RE_STR)
return [search_re.sub(quote_replace, line) for line in input]
"""Mask the quoted strings so we skip braces inside quoted strings."""
search_re = re.compile(r"(.*?)" + QUOTE_RE_STR)
return [search_re.sub(quote_replace, line) for line in input]
def do_split(input, masked_input, search_re):
output = []
mask_output = []
for (line, masked_line) in zip(input, masked_input):
m = search_re.match(masked_line)
while m:
split = len(m.group(1))
line = line[:split] + r'\n' + line[split:]
masked_line = masked_line[:split] + r'\n' + masked_line[split:]
m = search_re.match(masked_line)
output.extend(line.split(r'\n'))
mask_output.extend(masked_line.split(r'\n'))
return (output, mask_output)
output = []
mask_output = []
for (line, masked_line) in zip(input, masked_input):
m = search_re.match(masked_line)
while m:
split = len(m.group(1))
line = line[:split] + r"\n" + line[split:]
masked_line = masked_line[:split] + r"\n" + masked_line[split:]
m = search_re.match(masked_line)
output.extend(line.split(r"\n"))
mask_output.extend(masked_line.split(r"\n"))
return (output, mask_output)
def split_double_braces(input):
"""Masks out the quotes and comments, and then splits appropriate
"""Masks out the quotes and comments, and then splits appropriate
lines (lines that match the double_*_brace re's above) before
indenting them below.
@ -70,88 +72,86 @@ def split_double_braces(input):
that the indentation looks prettier when all laid out (e.g. closing
braces make a nice diagonal line).
"""
double_open_brace_re = re.compile(r'(.*?[\[\{\(,])(\s*)([\[\{\(])')
double_close_brace_re = re.compile(r'(.*?[\]\}\)],?)(\s*)([\]\}\)])')
double_open_brace_re = re.compile(r"(.*?[\[\{\(,])(\s*)([\[\{\(])")
double_close_brace_re = re.compile(r"(.*?[\]\}\)],?)(\s*)([\]\}\)])")
masked_input = mask_quotes(input)
masked_input = mask_comments(masked_input)
masked_input = mask_quotes(input)
masked_input = mask_comments(masked_input)
(output, mask_output) = do_split(input, masked_input, double_open_brace_re)
(output, mask_output) = do_split(output, mask_output, double_close_brace_re)
(output, mask_output) = do_split(input, masked_input, double_open_brace_re)
(output, mask_output) = do_split(output, mask_output, double_close_brace_re)
return output
return output
def count_braces(line):
"""keeps track of the number of braces on a given line and returns the result.
"""keeps track of the number of braces on a given line and returns the result.
It starts at zero and subtracts for closed braces, and adds for open braces.
"""
open_braces = ['[', '(', '{']
close_braces = [']', ')', '}']
closing_prefix_re = re.compile(r'(.*?[^\s\]\}\)]+.*?)([\]\}\)],?)\s*$')
cnt = 0
stripline = COMMENT_RE.sub(r'', line)
stripline = QUOTE_RE.sub(r"''", stripline)
for char in stripline:
for brace in open_braces:
if char == brace:
cnt += 1
for brace in close_braces:
if char == brace:
cnt -= 1
open_braces = ["[", "(", "{"]
close_braces = ["]", ")", "}"]
closing_prefix_re = re.compile(r"(.*?[^\s\]\}\)]+.*?)([\]\}\)],?)\s*$")
cnt = 0
stripline = COMMENT_RE.sub(r"", line)
stripline = QUOTE_RE.sub(r"''", stripline)
for char in stripline:
for brace in open_braces:
if char == brace:
cnt += 1
for brace in close_braces:
if char == brace:
cnt -= 1
after = False
if cnt > 0:
after = True
after = False
if cnt > 0:
after = True
# This catches the special case of a closing brace having something
# other than just whitespace ahead of it -- we don't want to
# unindent that until after this line is printed so it stays with
# the previous indentation level.
if cnt < 0 and closing_prefix_re.match(stripline):
after = True
return (cnt, after)
# This catches the special case of a closing brace having something
# other than just whitespace ahead of it -- we don't want to
# unindent that until after this line is printed so it stays with
# the previous indentation level.
if cnt < 0 and closing_prefix_re.match(stripline):
after = True
return (cnt, after)
def prettyprint_input(lines):
"""Does the main work of indenting the input based on the brace counts."""
indent = 0
basic_offset = 2
last_line = ""
for line in lines:
if COMMENT_RE.match(line):
print(line)
else:
line = line.strip('\r\n\t ') # Otherwise doesn't strip \r on Unix.
if len(line) > 0:
(brace_diff, after) = count_braces(line)
if brace_diff != 0:
if after:
print(" " * (basic_offset * indent) + line)
indent += brace_diff
else:
indent += brace_diff
print(" " * (basic_offset * indent) + line)
"""Does the main work of indenting the input based on the brace counts."""
indent = 0
basic_offset = 2
for line in lines:
if COMMENT_RE.match(line):
print(line)
else:
print(" " * (basic_offset * indent) + line)
else:
print("")
last_line = line
line = line.strip("\r\n\t ") # Otherwise doesn't strip \r on Unix.
if len(line) > 0:
(brace_diff, after) = count_braces(line)
if brace_diff != 0:
if after:
print(" " * (basic_offset * indent) + line)
indent += brace_diff
else:
indent += brace_diff
print(" " * (basic_offset * indent) + line)
else:
print(" " * (basic_offset * indent) + line)
else:
print("")
def main():
if len(sys.argv) > 1:
data = open(sys.argv[1]).read().splitlines()
else:
data = sys.stdin.read().splitlines()
# Split up the double braces.
lines = split_double_braces(data)
if len(sys.argv) > 1:
data = open(sys.argv[1]).read().splitlines()
else:
data = sys.stdin.read().splitlines()
# Split up the double braces.
lines = split_double_braces(data)
# Indent and print the output.
prettyprint_input(lines)
return 0
# Indent and print the output.
prettyprint_input(lines)
return 0
if __name__ == '__main__':
sys.exit(main())
if __name__ == "__main__":
sys.exit(main())
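A quick check of count_braces() above, assuming the module's definitions are in scope; the sample lines are hypothetical gyp fragments:

# Quoted strings are masked first, so braces inside quotes are ignored.
assert count_braces("'targets': [{") == (2, True)
# A line that is only closing braces unindents before printing (after=False).
assert count_braces("}],") == (-2, False)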

View file

@ -19,153 +19,164 @@ import re
import sys
import pretty_vcproj
__author__ = 'nsylvain (Nicolas Sylvain)'
__author__ = "nsylvain (Nicolas Sylvain)"
def BuildProject(project, built, projects, deps):
# if all dependencies are done, we can build it, otherwise we try to build the
# dependency.
# This is not infinite-recursion proof.
for dep in deps[project]:
if dep not in built:
BuildProject(dep, built, projects, deps)
print(project)
built.append(project)
# if all dependencies are done, we can build it, otherwise we try to build the
# dependency.
# This is not infinite-recursion proof.
for dep in deps[project]:
if dep not in built:
BuildProject(dep, built, projects, deps)
print(project)
built.append(project)
def ParseSolution(solution_file):
# All projects, their clsid and paths.
projects = dict()
# All projects, their clsid and paths.
projects = dict()
# A list of dependencies associated with a project.
dependencies = dict()
# A list of dependencies associated with a project.
dependencies = dict()
# Regular expressions that matches the SLN format.
# The first line of a project definition.
begin_project = re.compile(r'^Project\("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942'
r'}"\) = "(.*)", "(.*)", "(.*)"$')
# The last line of a project definition.
end_project = re.compile('^EndProject$')
# The first line of a dependency list.
begin_dep = re.compile(
r'ProjectSection\(ProjectDependencies\) = postProject$')
# The last line of a dependency list.
end_dep = re.compile('EndProjectSection$')
# A line describing a dependency.
dep_line = re.compile(' *({.*}) = ({.*})$')
# Regular expressions that matches the SLN format.
# The first line of a project definition.
begin_project = re.compile(
r'^Project\("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942'
r'}"\) = "(.*)", "(.*)", "(.*)"$'
)
# The last line of a project definition.
end_project = re.compile("^EndProject$")
# The first line of a dependency list.
begin_dep = re.compile(r"ProjectSection\(ProjectDependencies\) = postProject$")
# The last line of a dependency list.
end_dep = re.compile("EndProjectSection$")
# A line describing a dependency.
dep_line = re.compile(" *({.*}) = ({.*})$")
in_deps = False
solution = open(solution_file)
for line in solution:
results = begin_project.search(line)
if results:
# Hack to remove icu because the diff is too different.
if results.group(1).find('icu') != -1:
continue
# We remove "_gyp" from the names because it helps to diff them.
current_project = results.group(1).replace('_gyp', '')
projects[current_project] = [results.group(2).replace('_gyp', ''),
results.group(3),
results.group(2)]
dependencies[current_project] = []
continue
in_deps = False
solution = open(solution_file)
for line in solution:
results = begin_project.search(line)
if results:
# Hack to remove icu because the diff is too different.
if results.group(1).find("icu") != -1:
continue
# We remove "_gyp" from the names because it helps to diff them.
current_project = results.group(1).replace("_gyp", "")
projects[current_project] = [
results.group(2).replace("_gyp", ""),
results.group(3),
results.group(2),
]
dependencies[current_project] = []
continue
results = end_project.search(line)
if results:
current_project = None
continue
results = end_project.search(line)
if results:
current_project = None
continue
results = begin_dep.search(line)
if results:
in_deps = True
continue
results = begin_dep.search(line)
if results:
in_deps = True
continue
results = end_dep.search(line)
if results:
in_deps = False
continue
results = end_dep.search(line)
if results:
in_deps = False
continue
results = dep_line.search(line)
if results and in_deps and current_project:
dependencies[current_project].append(results.group(1))
continue
results = dep_line.search(line)
if results and in_deps and current_project:
dependencies[current_project].append(results.group(1))
continue
# Change all dependencies clsid to name instead.
for project in dependencies:
# For each dependency in this project
new_dep_array = []
for dep in dependencies[project]:
# Look for the project name matching this clsid
for project_info in projects:
if projects[project_info][1] == dep:
new_dep_array.append(project_info)
dependencies[project] = sorted(new_dep_array)
# Change all dependencies clsid to name instead.
for project in dependencies:
# For each dependency in this project
new_dep_array = []
for dep in dependencies[project]:
# Look for the project name matching this clsid
for project_info in projects:
if projects[project_info][1] == dep:
new_dep_array.append(project_info)
dependencies[project] = sorted(new_dep_array)
return (projects, dependencies)
return (projects, dependencies)
def PrintDependencies(projects, deps):
print("---------------------------------------")
print("Dependencies for all projects")
print("---------------------------------------")
print("-- --")
print("---------------------------------------")
print("Dependencies for all projects")
print("---------------------------------------")
print("-- --")
for (project, dep_list) in sorted(deps.items()):
print("Project : %s" % project)
print("Path : %s" % projects[project][0])
if dep_list:
for dep in dep_list:
print(" - %s" % dep)
print("")
for (project, dep_list) in sorted(deps.items()):
print("Project : %s" % project)
print("Path : %s" % projects[project][0])
if dep_list:
for dep in dep_list:
print(" - %s" % dep)
print("")
print("-- --")
print("-- --")
def PrintBuildOrder(projects, deps):
print("---------------------------------------")
print("Build order ")
print("---------------------------------------")
print("-- --")
print("---------------------------------------")
print("Build order ")
print("---------------------------------------")
print("-- --")
built = []
for (project, _) in sorted(deps.items()):
if project not in built:
BuildProject(project, built, projects, deps)
built = []
for (project, _) in sorted(deps.items()):
if project not in built:
BuildProject(project, built, projects, deps)
print("-- --")
print("-- --")
def PrintVCProj(projects):
for project in projects:
print("-------------------------------------")
print("-------------------------------------")
print(project)
print(project)
print(project)
print("-------------------------------------")
print("-------------------------------------")
for project in projects:
print("-------------------------------------")
print("-------------------------------------")
print(project)
print(project)
print(project)
print("-------------------------------------")
print("-------------------------------------")
project_path = os.path.abspath(os.path.join(os.path.dirname(sys.argv[1]),
projects[project][2]))
project_path = os.path.abspath(
os.path.join(os.path.dirname(sys.argv[1]), projects[project][2])
)
pretty = pretty_vcproj
argv = [
"",
project_path,
"$(SolutionDir)=%s\\" % os.path.dirname(sys.argv[1]),
]
argv.extend(sys.argv[3:])
pretty.main(argv)
pretty = pretty_vcproj
argv = [ '',
project_path,
'$(SolutionDir)=%s\\' % os.path.dirname(sys.argv[1]),
]
argv.extend(sys.argv[3:])
pretty.main(argv)
def main():
# check if we have exactly 1 parameter.
if len(sys.argv) < 2:
print('Usage: %s "c:\\path\\to\\project.sln"' % sys.argv[0])
return 1
# check if we have exactly 1 parameter.
if len(sys.argv) < 2:
print('Usage: %s "c:\\path\\to\\project.sln"' % sys.argv[0])
return 1
(projects, deps) = ParseSolution(sys.argv[1])
PrintDependencies(projects, deps)
PrintBuildOrder(projects, deps)
(projects, deps) = ParseSolution(sys.argv[1])
PrintDependencies(projects, deps)
PrintBuildOrder(projects, deps)
if '--recursive' in sys.argv:
PrintVCProj(projects)
return 0
if "--recursive" in sys.argv:
PrintVCProj(projects)
return 0
if __name__ == '__main__':
sys.exit(main())
if __name__ == "__main__":
sys.exit(main())
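For illustration, the begin_project regular expression above matches SLN project headers such as the hypothetical line below:

import re

begin_project = re.compile(
    r'^Project\("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942'
    r'}"\) = "(.*)", "(.*)", "(.*)"$'
)
# Hypothetical solution line; the three groups are (name, path, clsid).
line = ('Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = '
        '"base", "base\\base.vcproj", '
        '"{DEADBEEF-0000-0000-0000-000000000000}"')
m = begin_project.search(line)
assert m and m.group(1) == "base"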

View file

@ -20,318 +20,326 @@ import sys
from xml.dom.minidom import parse
from xml.dom.minidom import Node
__author__ = 'nsylvain (Nicolas Sylvain)'
__author__ = "nsylvain (Nicolas Sylvain)"
try:
cmp
cmp
except NameError:
def cmp(x, y):
return (x > y) - (x < y)
def cmp(x, y):
return (x > y) - (x < y)
REPLACEMENTS = dict()
ARGUMENTS = None
class CmpTuple(object):
"""Compare function between 2 tuple."""
def __call__(self, x, y):
return cmp(x[0], y[0])
"""Compare function between 2 tuple."""
def __call__(self, x, y):
return cmp(x[0], y[0])
class CmpNode(object):
"""Compare function between 2 xml nodes."""
"""Compare function between 2 xml nodes."""
def __call__(self, x, y):
def get_string(node):
node_string = "node"
node_string += node.nodeName
if node.nodeValue:
node_string += node.nodeValue
def __call__(self, x, y):
def get_string(node):
node_string = "node"
node_string += node.nodeName
if node.nodeValue:
node_string += node.nodeValue
if node.attributes:
# We first sort by name, if present.
node_string += node.getAttribute("Name")
if node.attributes:
# We first sort by name, if present.
node_string += node.getAttribute("Name")
all_nodes = []
for (name, value) in node.attributes.items():
all_nodes.append((name, value))
all_nodes = []
for (name, value) in node.attributes.items():
all_nodes.append((name, value))
all_nodes.sort(CmpTuple())
for (name, value) in all_nodes:
node_string += name
node_string += value
all_nodes.sort(CmpTuple())
for (name, value) in all_nodes:
node_string += name
node_string += value
return node_string
return node_string
return cmp(get_string(x), get_string(y))
return cmp(get_string(x), get_string(y))
def PrettyPrintNode(node, indent=0):
if node.nodeType == Node.TEXT_NODE:
if node.data.strip():
print('%s%s' % (' '*indent, node.data.strip()))
return
if node.nodeType == Node.TEXT_NODE:
if node.data.strip():
print("%s%s" % (" " * indent, node.data.strip()))
return
if node.childNodes:
node.normalize()
# Get the number of attributes
attr_count = 0
if node.attributes:
attr_count = node.attributes.length
if node.childNodes:
node.normalize()
# Get the number of attributes
attr_count = 0
if node.attributes:
attr_count = node.attributes.length
# Print the main tag
if attr_count == 0:
print('%s<%s>' % (' '*indent, node.nodeName))
else:
print('%s<%s' % (' '*indent, node.nodeName))
# Print the main tag
if attr_count == 0:
print("%s<%s>" % (" " * indent, node.nodeName))
else:
print("%s<%s" % (" " * indent, node.nodeName))
all_attributes = []
for (name, value) in node.attributes.items():
all_attributes.append((name, value))
all_attributes.sort(CmpTuple())
for (name, value) in all_attributes:
print('%s %s="%s"' % (' '*indent, name, value))
print('%s>' % (' '*indent))
if node.nodeValue:
print('%s %s' % (' '*indent, node.nodeValue))
all_attributes = []
for (name, value) in node.attributes.items():
all_attributes.append((name, value))
all_attributes.sort(CmpTuple())
for (name, value) in all_attributes:
print('%s %s="%s"' % (" " * indent, name, value))
print("%s>" % (" " * indent))
if node.nodeValue:
print("%s %s" % (" " * indent, node.nodeValue))
for sub_node in node.childNodes:
PrettyPrintNode(sub_node, indent=indent+2)
print('%s</%s>' % (' '*indent, node.nodeName))
for sub_node in node.childNodes:
PrettyPrintNode(sub_node, indent=indent + 2)
print("%s</%s>" % (" " * indent, node.nodeName))
def FlattenFilter(node):
"""Returns a list of all the node and sub nodes."""
node_list = []
"""Returns a list of all the node and sub nodes."""
node_list = []
if (node.attributes and
node.getAttribute('Name') == '_excluded_files'):
# We don't add the "_excluded_files" filter.
return []
if node.attributes and node.getAttribute("Name") == "_excluded_files":
# We don't add the "_excluded_files" filter.
return []
for current in node.childNodes:
if current.nodeName == 'Filter':
node_list.extend(FlattenFilter(current))
else:
node_list.append(current)
for current in node.childNodes:
if current.nodeName == "Filter":
node_list.extend(FlattenFilter(current))
else:
node_list.append(current)
return node_list
return node_list
def FixFilenames(filenames, current_directory):
new_list = []
for filename in filenames:
if filename:
for key in REPLACEMENTS:
filename = filename.replace(key, REPLACEMENTS[key])
os.chdir(current_directory)
filename = filename.strip('"\' ')
if filename.startswith('$'):
new_list.append(filename)
else:
new_list.append(os.path.abspath(filename))
return new_list
new_list = []
for filename in filenames:
if filename:
for key in REPLACEMENTS:
filename = filename.replace(key, REPLACEMENTS[key])
os.chdir(current_directory)
filename = filename.strip("\"' ")
if filename.startswith("$"):
new_list.append(filename)
else:
new_list.append(os.path.abspath(filename))
return new_list
def AbsoluteNode(node):
"""Makes all the properties we know about in this node absolute."""
if node.attributes:
for (name, value) in node.attributes.items():
if name in ['InheritedPropertySheets', 'RelativePath',
'AdditionalIncludeDirectories',
'IntermediateDirectory', 'OutputDirectory',
'AdditionalLibraryDirectories']:
# We want to fix up these paths
path_list = value.split(';')
new_list = FixFilenames(path_list, os.path.dirname(ARGUMENTS[1]))
node.setAttribute(name, ';'.join(new_list))
if not value:
node.removeAttribute(name)
"""Makes all the properties we know about in this node absolute."""
if node.attributes:
for (name, value) in node.attributes.items():
if name in [
"InheritedPropertySheets",
"RelativePath",
"AdditionalIncludeDirectories",
"IntermediateDirectory",
"OutputDirectory",
"AdditionalLibraryDirectories",
]:
# We want to fix up these paths
path_list = value.split(";")
new_list = FixFilenames(path_list, os.path.dirname(ARGUMENTS[1]))
node.setAttribute(name, ";".join(new_list))
if not value:
node.removeAttribute(name)
def CleanupVcproj(node):
"""For each sub node, we call recursively this function."""
for sub_node in node.childNodes:
AbsoluteNode(sub_node)
CleanupVcproj(sub_node)
"""For each sub node, we call recursively this function."""
for sub_node in node.childNodes:
AbsoluteNode(sub_node)
CleanupVcproj(sub_node)
# Normalize the node, and remove all extraneous whitespaces.
for sub_node in node.childNodes:
if sub_node.nodeType == Node.TEXT_NODE:
sub_node.data = sub_node.data.replace("\r", "")
sub_node.data = sub_node.data.replace("\n", "")
sub_node.data = sub_node.data.rstrip()
# Normalize the node, and remove all extraneous whitespaces.
for sub_node in node.childNodes:
if sub_node.nodeType == Node.TEXT_NODE:
sub_node.data = sub_node.data.replace("\r", "")
sub_node.data = sub_node.data.replace("\n", "")
sub_node.data = sub_node.data.rstrip()
# Fix all the semicolon separated attributes to be sorted, and we also
# remove the dups.
if node.attributes:
for (name, value) in node.attributes.items():
sorted_list = sorted(value.split(';'))
unique_list = []
for i in sorted_list:
if not unique_list.count(i):
unique_list.append(i)
node.setAttribute(name, ';'.join(unique_list))
if not value:
node.removeAttribute(name)
# Fix all the semicolon separated attributes to be sorted, and we also
# remove the dups.
if node.attributes:
for (name, value) in node.attributes.items():
sorted_list = sorted(value.split(";"))
unique_list = []
for i in sorted_list:
if not unique_list.count(i):
unique_list.append(i)
node.setAttribute(name, ";".join(unique_list))
if not value:
node.removeAttribute(name)
if node.childNodes:
node.normalize()
if node.childNodes:
node.normalize()
# For each node, take a copy, and remove it from the list.
node_array = []
while node.childNodes and node.childNodes[0]:
# Take a copy of the node and remove it from the list.
current = node.childNodes[0]
node.removeChild(current)
# For each node, take a copy, and remove it from the list.
node_array = []
while node.childNodes and node.childNodes[0]:
# Take a copy of the node and remove it from the list.
current = node.childNodes[0]
node.removeChild(current)
# If the child is a filter, we want to append all its children
# to this same list.
if current.nodeName == 'Filter':
node_array.extend(FlattenFilter(current))
else:
node_array.append(current)
# If the child is a filter, we want to append all its children
# to this same list.
if current.nodeName == "Filter":
node_array.extend(FlattenFilter(current))
else:
node_array.append(current)
# Sort the list.
node_array.sort(CmpNode())
# Sort the list.
node_array.sort(CmpNode())
# Insert the nodes in the correct order.
for new_node in node_array:
# But don't append empty tool node.
if new_node.nodeName == 'Tool':
if new_node.attributes and new_node.attributes.length == 1:
# This one was empty.
continue
if new_node.nodeName == 'UserMacro':
continue
node.appendChild(new_node)
# Insert the nodes in the correct order.
for new_node in node_array:
# But don't append empty tool node.
if new_node.nodeName == "Tool":
if new_node.attributes and new_node.attributes.length == 1:
# This one was empty.
continue
if new_node.nodeName == "UserMacro":
continue
node.appendChild(new_node)
def GetConfiguationNodes(vcproj):
#TODO(nsylvain): Find a better way to navigate the xml.
nodes = []
for node in vcproj.childNodes:
if node.nodeName == "Configurations":
for sub_node in node.childNodes:
if sub_node.nodeName == "Configuration":
nodes.append(sub_node)
# TODO(nsylvain): Find a better way to navigate the xml.
nodes = []
for node in vcproj.childNodes:
if node.nodeName == "Configurations":
for sub_node in node.childNodes:
if sub_node.nodeName == "Configuration":
nodes.append(sub_node)
return nodes
return nodes
def GetChildrenVsprops(filename):
dom = parse(filename)
if dom.documentElement.attributes:
vsprops = dom.documentElement.getAttribute('InheritedPropertySheets')
return FixFilenames(vsprops.split(';'), os.path.dirname(filename))
return []
dom = parse(filename)
if dom.documentElement.attributes:
vsprops = dom.documentElement.getAttribute("InheritedPropertySheets")
return FixFilenames(vsprops.split(";"), os.path.dirname(filename))
return []
def SeekToNode(node1, child2):
# A text node does not have properties.
if child2.nodeType == Node.TEXT_NODE:
# A text node does not have properties.
if child2.nodeType == Node.TEXT_NODE:
return None
# Get the name of the current node.
current_name = child2.getAttribute("Name")
if not current_name:
# There is no name. We don't know how to merge.
return None
# Look through all the nodes to find a match.
for sub_node in node1.childNodes:
if sub_node.nodeName == child2.nodeName:
name = sub_node.getAttribute("Name")
if name == current_name:
return sub_node
# No match. We give up.
return None
# Get the name of the current node.
current_name = child2.getAttribute("Name")
if not current_name:
# There is no name. We don't know how to merge.
return None
# Look through all the nodes to find a match.
for sub_node in node1.childNodes:
if sub_node.nodeName == child2.nodeName:
name = sub_node.getAttribute("Name")
if name == current_name:
return sub_node
# No match. We give up.
return None
def MergeAttributes(node1, node2):
# No attributes to merge?
if not node2.attributes:
return
# No attributes to merge?
if not node2.attributes:
return
for (name, value2) in node2.attributes.items():
# Don't merge the 'Name' attribute.
if name == 'Name':
continue
value1 = node1.getAttribute(name)
if value1:
# The attribute exists in the main node. If it's equal, we leave it
# untouched, otherwise we concatenate it.
if value1 != value2:
node1.setAttribute(name, ';'.join([value1, value2]))
else:
# The attribute does not exist in the main node. We append this one.
node1.setAttribute(name, value2)
for (name, value2) in node2.attributes.items():
# Don't merge the 'Name' attribute.
if name == "Name":
continue
value1 = node1.getAttribute(name)
if value1:
# The attribute exists in the main node. If it's equal, we leave it
# untouched, otherwise we concatenate it.
if value1 != value2:
node1.setAttribute(name, ";".join([value1, value2]))
else:
# The attribute does not exist in the main node. We append this one.
node1.setAttribute(name, value2)
# If the attribute was a property sheet attribute, we remove it, since
# they are useless.
if name == 'InheritedPropertySheets':
node1.removeAttribute(name)
# If the attribute was a property sheet attribute, we remove it, since
# they are useless.
if name == "InheritedPropertySheets":
node1.removeAttribute(name)
def MergeProperties(node1, node2):
MergeAttributes(node1, node2)
for child2 in node2.childNodes:
child1 = SeekToNode(node1, child2)
if child1:
MergeProperties(child1, child2)
else:
node1.appendChild(child2.cloneNode(True))
MergeAttributes(node1, node2)
for child2 in node2.childNodes:
child1 = SeekToNode(node1, child2)
if child1:
MergeProperties(child1, child2)
else:
node1.appendChild(child2.cloneNode(True))
def main(argv):
"""Main function of this vcproj prettifier."""
global ARGUMENTS
ARGUMENTS = argv
"""Main function of this vcproj prettifier."""
global ARGUMENTS
ARGUMENTS = argv
# check if we have exactly 1 parameter.
if len(argv) < 2:
print('Usage: %s "c:\\path\\to\\vcproj.vcproj" [key1=value1] '
'[key2=value2]' % argv[0])
return 1
# check if we have exactly 1 parameter.
if len(argv) < 2:
print(
'Usage: %s "c:\\path\\to\\vcproj.vcproj" [key1=value1] '
"[key2=value2]" % argv[0]
)
return 1
# Parse the keys
for i in range(2, len(argv)):
(key, value) = argv[i].split('=')
REPLACEMENTS[key] = value
# Parse the keys
for i in range(2, len(argv)):
(key, value) = argv[i].split("=")
REPLACEMENTS[key] = value
# Open the vcproj and parse the xml.
dom = parse(argv[1])
# Open the vcproj and parse the xml.
dom = parse(argv[1])
# First thing we need to do is find the Configuration Node and merge them
# with the vsprops they include.
for configuration_node in GetConfiguationNodes(dom.documentElement):
# Get the property sheets associated with this configuration.
vsprops = configuration_node.getAttribute('InheritedPropertySheets')
# First thing we need to do is find the Configuration Node and merge them
# with the vsprops they include.
for configuration_node in GetConfiguationNodes(dom.documentElement):
# Get the property sheets associated with this configuration.
vsprops = configuration_node.getAttribute("InheritedPropertySheets")
# Fix the filenames to be absolute.
vsprops_list = FixFilenames(vsprops.strip().split(';'),
os.path.dirname(argv[1]))
# Fix the filenames to be absolute.
vsprops_list = FixFilenames(
vsprops.strip().split(";"), os.path.dirname(argv[1])
)
# Extend the list of vsprops with all vsprops contained in the current
# vsprops.
for current_vsprops in vsprops_list:
vsprops_list.extend(GetChildrenVsprops(current_vsprops))
# Extend the list of vsprops with all vsprops contained in the current
# vsprops.
for current_vsprops in vsprops_list:
vsprops_list.extend(GetChildrenVsprops(current_vsprops))
# Now that we have all the vsprops, we need to merge them.
for current_vsprops in vsprops_list:
MergeProperties(configuration_node,
parse(current_vsprops).documentElement)
# Now that we have all the vsprops, we need to merge them.
for current_vsprops in vsprops_list:
MergeProperties(configuration_node, parse(current_vsprops).documentElement)
# Now that everything is merged, we need to cleanup the xml.
CleanupVcproj(dom.documentElement)
# Now that everything is merged, we need to cleanup the xml.
CleanupVcproj(dom.documentElement)
# Finally, we use the pretty XML function to print the vcproj back to the
# user.
#print dom.toprettyxml(newl="\n")
PrettyPrintNode(dom.documentElement)
return 0
# Finally, we use the pretty XML function to print the vcproj back to the
# user.
# print dom.toprettyxml(newl="\n")
PrettyPrintNode(dom.documentElement)
return 0
if __name__ == '__main__':
sys.exit(main(sys.argv))
if __name__ == "__main__":
sys.exit(main(sys.argv))
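A small sketch of the attribute-merge rule implemented by MergeAttributes() above (assumed in scope): equal values are kept as-is, differing values are joined with a semicolon:

from xml.dom.minidom import parseString

n1 = parseString('<Tool Name="VCCLCompilerTool" WarningLevel="3"/>').documentElement
n2 = parseString('<Tool Name="VCCLCompilerTool" WarningLevel="4"/>').documentElement
MergeAttributes(n1, n2)  # "Name" itself is never merged
assert n1.getAttribute("WarningLevel") == "3;4"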