16Sep2019 - detailed test results for TextTest

To reconnect the TextTest GUI to these results, run:

texttest -a texttest -c /home/delphi/texttest/texttest -d /home/delphi/texttest/selftest -reconnect /home/delphi/.texttest/tmp/texttest.01Oct000004.31161 -g

To start TextTest for these tests, run:

texttest -a texttest -c /home/delphi/texttest/texttest -d /home/delphi/texttest/selftest


default: 852 tests: 794 succeeded 49 killed 9 FAILED

Detailed information for the tests that FAILED:

TEST FAILED on ts-sim-scenario-ba : TestSelf KnownBugs BugzillaPlugin Version2 BadScript ( Last six runs Sep2019 )

---------- Differences in errors ----------
0a1,45
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
> ModuleNotFoundError: No module named 'default'
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "<filtered>/testmodel.py", line <filtered>, in makeConfigObject
>     return plugins.importAndCall(moduleName, "getConfig", self.inputOptions)
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     import texttestlib.default.batch
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     from texttestlib.default.batch import testoverview
>   File "<filtered>/testoverview.py", line <filtered>, in <module>
>     from texttestlib.default.batch import HTMLgen, HTMLcolors, jenkinschanges
>   File "<filtered>/HTMLgen.py", line <filtered>, in <module>
>     from .imgsize import imgsize
>   File "<filtered>/imgsize.py", line <filtered>, in <module>
>     from PIL import Image
>   File "<filtered>/Image.py", line <filtered>, in <module>
>     from pathlib import Path
>   File "<filtered>/pathlib.py", line <filtered>, in <module>
>     from urllib.parse import quote_from_bytes as urlquote_from_bytes
<truncated after showing first 30 lines>
---------- Differences in catalogue ----------
1,11c1
< The following new files/directories were created:
< <Test Directory>
< ----texttesttmp
< --------local.<datetime>.<pid>
< ------------batchreport.hello
< ------------hello
< ----------------Test
< --------------------errors.hello
< --------------------output.hello
< --------------------framework_tmp
< ------------------------teststate
---
> No files or directories were created, edited or deleted.
---------- Differences in output ----------
1,14d0
< Using Application HELLO
< Running HELLO test-suite TargetApp
<   Running HELLO test-case Test
< Comparing differences for HELLO test-suite TargetApp
<   Comparing differences for HELLO test-case Test (on errors.hello,output.hello)
<   HELLO test-case Test had known bugs (bug 42 (BAD SCRIPT)) on <machine> : differences in output
< Results:
< 
< Tests that did not succeed:
<   HELLO test-case Test had known bugs (bug 42 (BAD SCRIPT)) on <machine> : differences in output
< 
< Tests Run: 1, Known Bugs: 1
< Creating batch report for application HELLO ...
< File written.
---------- Differences in pythonmocks ----------
2,7d1
< <-PYT:import urllib.request
< <-PYT:import urllib.error
< <-PYT:urllib.request.urlopen('http://bugzilla.myveryoldsite.com/cli.cgi?bug=42')
< ->RET:raise Instance('URLError(IOError)', 'urlerror1')
< <-PYT:urlerror1.__str__()
< ->RET:'<urlopen error [Errno -2] Name or service not known>'
---------- New result in exitcode ----------
1
---------- Missing result in targetReport ----------
known bugs=1
<today's date>
HELLO : 1 tests : 1 known bugs

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests had known bugs : 
In HELLO test-suite TargetApp:
  - HELLO test-case Test : bug 42 (BAD SCRIPT)


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that had known bugs follows...
--------------------------------------------------------
TEST had known bugs (bug 42 (BAD SCRIPT)) on : HELLO test-case Test (under Test)
Failed to open URL 'http://bugzilla.myveryoldsite.com/cli.cgi?bug=42': <urlopen error [Errno -2] Name or service not known>.

Please make sure that the configuration entry 'bug_system_location' points to the correct script to run to extract bugzilla information. The current value is 'http://bugzilla.myveryoldsite.com'.
(This bug was triggered by text found in file 'output' matching 'after [0-9]* seconds')
---------- Differences in output ----------
1c1
< Hello world
---
> Hello after 5 seconds

TEST FAILED on ts-sim-scenario-ba : TestSelf KnownBugs BugzillaPlugin Version2 ClosedBug ( Last six runs Sep2019 )

---------- Differences in errors ----------
0a1,45
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
> ModuleNotFoundError: No module named 'default'
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "<filtered>/testmodel.py", line <filtered>, in makeConfigObject
>     return plugins.importAndCall(moduleName, "getConfig", self.inputOptions)
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     import texttestlib.default.batch
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     from texttestlib.default.batch import testoverview
>   File "<filtered>/testoverview.py", line <filtered>, in <module>
>     from texttestlib.default.batch import HTMLgen, HTMLcolors, jenkinschanges
>   File "<filtered>/HTMLgen.py", line <filtered>, in <module>
>     from .imgsize import imgsize
>   File "<filtered>/imgsize.py", line <filtered>, in <module>
>     from PIL import Image
>   File "<filtered>/Image.py", line <filtered>, in <module>
>     from pathlib import Path
>   File "<filtered>/pathlib.py", line <filtered>, in <module>
>     from urllib.parse import quote_from_bytes as urlquote_from_bytes
<truncated after showing first 30 lines>
---------- Differences in catalogue ----------
1,11c1
< The following new files/directories were created:
< <Test Directory>
< ----texttesttmp
< --------local.<datetime>.<pid>
< ------------batchreport.hello
< ------------hello
< ----------------Test
< --------------------errors.hello
< --------------------output.hello
< --------------------framework_tmp
< ------------------------teststate
---
> No files or directories were created, edited or deleted.
---------- Differences in output ----------
1,14d0
< Using Application HELLO
< Running HELLO test-suite TargetApp
<   Running HELLO test-case Test
< Comparing differences for HELLO test-suite TargetApp
<   Comparing differences for HELLO test-case Test (on errors.hello,output.hello)
<   HELLO test-case Test had internal errors (bug 42 (CLOSED)) on <machine> : differences in output
< Results:
< 
< Tests that did not succeed:
<   HELLO test-case Test had internal errors (bug 42 (CLOSED)) on <machine> : differences in output
< 
< Tests Run: 1, Failures: 1
< Creating batch report for application HELLO ...
< File written.
---------- Differences in pythonmocks ----------
2,7d1
< <-PYT:import urllib.request
< <-PYT:import urllib.error
< <-PYT:urllib.request.urlopen('http://bugzilla.myveryoldsite.com/cli.cgi?bug=42')
< ->RET:Instance('addinfourl', 'addinfourl1')
< <-PYT:addinfourl1.read()
< ->RET:'hello:jaeger:4:jaeger:Not greeting the world properly:jaeger:2003-04-14 11:54:32:jaeger:CLOSED:jaeger:enhancement:jaeger:265:jaeger:\nIt\'s meant to be a hello world program!:jaeger:'
---------- Missing result in targetReport ----------
internal errors=1
<today's date>
HELLO : 1 tests : 1 internal errors

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests had internal errors : 
In HELLO test-suite TargetApp:
  - HELLO test-case Test : bug 42 (CLOSED)


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that had internal errors follows...
--------------------------------------------------------
TEST had internal errors (bug 42 (CLOSED)) on : HELLO test-case Test (under Test)
******************************************************
BugId: 42          Assigned: 265
Severity: enhancement  Status: CLOSED
Priority: 4     Created: 2003-04-14 11:54:32
Component: hello
Summary: Not greeting the world properly
Description:

It's meant to be a hello world program!
******************************************************
(This bug was triggered by text found in file 'output' matching 'after [0-9]* seconds')
---------- Differences in output ----------
1c1
<truncated after showing first 30 lines>

TEST FAILED on ts-sim-scenario-ba : TestSelf KnownBugs BugzillaPlugin Version2 IncompatibleVersion ( Last six runs Sep2019 )

---------- Differences in errors ----------
0a1,45
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
> ModuleNotFoundError: No module named 'default'
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "<filtered>/testmodel.py", line <filtered>, in makeConfigObject
>     return plugins.importAndCall(moduleName, "getConfig", self.inputOptions)
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     import texttestlib.default.batch
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     from texttestlib.default.batch import testoverview
>   File "<filtered>/testoverview.py", line <filtered>, in <module>
>     from texttestlib.default.batch import HTMLgen, HTMLcolors, jenkinschanges
>   File "<filtered>/HTMLgen.py", line <filtered>, in <module>
>     from .imgsize import imgsize
>   File "<filtered>/imgsize.py", line <filtered>, in <module>
>     from PIL import Image
>   File "<filtered>/Image.py", line <filtered>, in <module>
>     from pathlib import Path
>   File "<filtered>/pathlib.py", line <filtered>, in <module>
>     from urllib.parse import quote_from_bytes as urlquote_from_bytes
<truncated after showing first 30 lines>
---------- Differences in catalogue ----------
1,11c1
< The following new files/directories were created:
< <Test Directory>
< ----texttesttmp
< --------local.<datetime>.<pid>
< ------------batchreport.hello
< ------------hello
< ----------------Test
< --------------------errors.hello
< --------------------output.hello
< --------------------framework_tmp
< ------------------------teststate
---
> No files or directories were created, edited or deleted.
---------- Differences in output ----------
1,14d0
< Using Application HELLO
< Running HELLO test-suite TargetApp
<   Running HELLO test-case Test
< Comparing differences for HELLO test-suite TargetApp
<   Comparing differences for HELLO test-case Test (on errors.hello,output.hello)
<   HELLO test-case Test had known bugs (bug 42 (BAD SCRIPT)) on <machine> : differences in output
< Results:
< 
< Tests that did not succeed:
<   HELLO test-case Test had known bugs (bug 42 (BAD SCRIPT)) on <machine> : differences in output
< 
< Tests Run: 1, Known Bugs: 1
< Creating batch report for application HELLO ...
< File written.
---------- Differences in pythonmocks ----------
2,7d1
< <-PYT:import urllib.request
< <-PYT:import urllib.error
< <-PYT:urllib.request.urlopen('http://bugzilla.myveryoldsite.com/cli.cgi?bug=42')
< ->RET:Instance('addinfourl', 'addinfourl1')
< <-PYT:addinfourl1.read()
< ->RET:'What a load of rubbish!'
---------- New result in exitcode ----------
1
---------- Missing result in targetReport ----------
known bugs=1
<today's date>
HELLO : 1 tests : 1 known bugs

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests had known bugs : 
In HELLO test-suite TargetApp:
  - HELLO test-case Test : bug 42 (BAD SCRIPT)


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that had known bugs follows...
--------------------------------------------------------
TEST had known bugs (bug 42 (BAD SCRIPT)) on : HELLO test-case Test (under Test)
Could not parse reply from Bugzilla's cli.cgi script, maybe incompatible interface (this only works on version 2). Text of reply follows : 
What a load of rubbish!
(This bug was triggered by text found in file 'output' matching 'after [0-9]* seconds')
---------- Differences in output ----------
1c1
< Hello world
---
> Hello after 5 seconds

TEST FAILED on ts-sim-scenario-ba : TestSelf KnownBugs BugzillaPlugin Version2 OpenBug ( Last six runs Sep2019 )

---------- Differences in errors ----------
0a1,45
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
> ModuleNotFoundError: No module named 'default'
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "<filtered>/testmodel.py", line <filtered>, in makeConfigObject
>     return plugins.importAndCall(moduleName, "getConfig", self.inputOptions)
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     import texttestlib.default.batch
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     from texttestlib.default.batch import testoverview
>   File "<filtered>/testoverview.py", line <filtered>, in <module>
>     from texttestlib.default.batch import HTMLgen, HTMLcolors, jenkinschanges
>   File "<filtered>/HTMLgen.py", line <filtered>, in <module>
>     from .imgsize import imgsize
>   File "<filtered>/imgsize.py", line <filtered>, in <module>
>     from PIL import Image
>   File "<filtered>/Image.py", line <filtered>, in <module>
>     from pathlib import Path
>   File "<filtered>/pathlib.py", line <filtered>, in <module>
>     from urllib.parse import quote_from_bytes as urlquote_from_bytes
<truncated after showing first 30 lines>
---------- Differences in catalogue ----------
1,11c1
< The following new files/directories were created:
< <Test Directory>
< ----texttesttmp
< --------local.<datetime>.<pid>
< ------------batchreport.hello
< ------------hello
< ----------------Test
< --------------------errors.hello
< --------------------output.hello
< --------------------framework_tmp
< ------------------------teststate
---
> No files or directories were created, edited or deleted.
---------- Differences in output ----------
1,14d0
< Using Application HELLO
< Running HELLO test-suite TargetApp
<   Running HELLO test-case Test
< Comparing differences for HELLO test-suite TargetApp
<   Comparing differences for HELLO test-case Test (on errors.hello,output.hello)
<   HELLO test-case Test had known bugs (bug 42 (NEW)) on <machine> : differences in output
< Results:
< 
< Tests that did not succeed:
<   HELLO test-case Test had known bugs (bug 42 (NEW)) on <machine> : differences in output
< 
< Tests Run: 1, Known Bugs: 1
< Creating batch report for application HELLO ...
< File written.
---------- Differences in pythonmocks ----------
2,7d1
< <-PYT:import urllib.request
< <-PYT:import urllib.error
< <-PYT:urllib.request.urlopen('http://bugzilla.myveryoldsite.com/cli.cgi?bug=42')
< ->RET:Instance('addinfourl', 'addinfourl1')
< <-PYT:addinfourl1.read()
< ->RET:'hello:jaeger:4:jaeger:Not greeting the world properly:jaeger:2003-04-14 11:54:32:jaeger:NEW:jaeger:enhancement:jaeger:265:jaeger:\nIt\'s meant to be a hello world program!:jaeger:'
---------- New result in exitcode ----------
1
---------- Missing result in targetReport ----------
known bugs=1
<today's date>
HELLO : 1 tests : 1 known bugs

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests had known bugs : 
In HELLO test-suite TargetApp:
  - HELLO test-case Test : bug 42 (NEW)


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that had known bugs follows...
--------------------------------------------------------
TEST had known bugs (bug 42 (NEW)) on : HELLO test-case Test (under Test)
******************************************************
BugId: 42          Assigned: 265
Severity: enhancement  Status: NEW
Priority: 4     Created: 2003-04-14 11:54:32
Component: hello
Summary: Not greeting the world properly
Description:

It's meant to be a hello world program!
******************************************************
(This bug was triggered by text found in file 'output' matching 'after [0-9]* seconds')
---------- Differences in output ----------
1c1
<truncated after showing first 30 lines>

TEST FAILED on ts-sim-scenario-ba : TestSelf KnownBugs BugzillaPlugin Version2 UnknownBug ( Last six runs Sep2019 )

---------- Differences in errors ----------
0a1,45
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
> ModuleNotFoundError: No module named 'default'
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "<filtered>/testmodel.py", line <filtered>, in makeConfigObject
>     return plugins.importAndCall(moduleName, "getConfig", self.inputOptions)
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     import texttestlib.default.batch
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     from texttestlib.default.batch import testoverview
>   File "<filtered>/testoverview.py", line <filtered>, in <module>
>     from texttestlib.default.batch import HTMLgen, HTMLcolors, jenkinschanges
>   File "<filtered>/HTMLgen.py", line <filtered>, in <module>
>     from .imgsize import imgsize
>   File "<filtered>/imgsize.py", line <filtered>, in <module>
>     from PIL import Image
>   File "<filtered>/Image.py", line <filtered>, in <module>
>     from pathlib import Path
>   File "<filtered>/pathlib.py", line <filtered>, in <module>
>     from urllib.parse import quote_from_bytes as urlquote_from_bytes
<truncated after showing first 30 lines>
---------- Differences in catalogue ----------
1,11c1
< The following new files/directories were created:
< <Test Directory>
< ----texttesttmp
< --------local.<datetime>.<pid>
< ------------batchreport.hello
< ------------hello
< ----------------Test
< --------------------errors.hello
< --------------------output.hello
< --------------------framework_tmp
< ------------------------teststate
---
> No files or directories were created, edited or deleted.
---------- Differences in output ----------
1,14d0
< Using Application HELLO
< Running HELLO test-suite TargetApp
<   Running HELLO test-case Test
< Comparing differences for HELLO test-suite TargetApp
<   Comparing differences for HELLO test-case Test (on errors.hello,output.hello)
<   HELLO test-case Test had known bugs (bug 42 (NONEXISTENT)) on <machine> : differences in output
< Results:
< 
< Tests that did not succeed:
<   HELLO test-case Test had known bugs (bug 42 (NONEXISTENT)) on <machine> : differences in output
< 
< Tests Run: 1, Known Bugs: 1
< Creating batch report for application HELLO ...
< File written.
---------- Differences in pythonmocks ----------
2,7d1
< <-PYT:import urllib.request
< <-PYT:import urllib.error
< <-PYT:urllib.request.urlopen('http://bugzilla.myveryoldsite.com/cli.cgi?bug=42')
< ->RET:Instance('addinfourl', 'addinfourl1')
< <-PYT:addinfourl1.read()
< ->RET:''
---------- New result in exitcode ----------
1
---------- Missing result in targetReport ----------
known bugs=1
<today's date>
HELLO : 1 tests : 1 known bugs

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests had known bugs : 
In HELLO test-suite TargetApp:
  - HELLO test-case Test : bug 42 (NONEXISTENT)


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that had known bugs follows...
--------------------------------------------------------
TEST had known bugs (bug 42 (NONEXISTENT)) on : HELLO test-case Test (under Test)
Bug 42 could not be found in the Bugzilla version 2 instance at http://bugzilla.myveryoldsite.com.
(This bug was triggered by text found in file 'output' matching 'after [0-9]* seconds')
---------- Differences in output ----------
1c1
< Hello world
---
> Hello after 5 seconds

TEST FAILED on ts-sim-scenario-ba : TestSelf KnownBugs TracPlugin BadScript ( Last six runs Sep2019 )

---------- Differences in errors ----------
0a1,45
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
> ModuleNotFoundError: No module named 'default'
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "<filtered>/testmodel.py", line <filtered>, in makeConfigObject
>     return plugins.importAndCall(moduleName, "getConfig", self.inputOptions)
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     import texttestlib.default.batch
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     from texttestlib.default.batch import testoverview
>   File "<filtered>/testoverview.py", line <filtered>, in <module>
>     from texttestlib.default.batch import HTMLgen, HTMLcolors, jenkinschanges
>   File "<filtered>/HTMLgen.py", line <filtered>, in <module>
>     from .imgsize import imgsize
>   File "<filtered>/imgsize.py", line <filtered>, in <module>
>     from PIL import Image
>   File "<filtered>/Image.py", line <filtered>, in <module>
>     from pathlib import Path
>   File "<filtered>/pathlib.py", line <filtered>, in <module>
>     from urllib.parse import quote_from_bytes as urlquote_from_bytes
<truncated after showing first 30 lines>
---------- Differences in catalogue ----------
1,11c1
< The following new files/directories were created:
< <Test Directory>
< ----texttesttmp
< --------local.<datetime>.<pid>
< ------------batchreport.hello
< ------------hello
< ----------------Test
< --------------------errors.hello
< --------------------output.hello
< --------------------framework_tmp
< ------------------------teststate
---
> No files or directories were created, edited or deleted.
---------- Differences in output ----------
1,14d0
< Using Application HELLO
< Running HELLO test-suite TargetApp
<   Running HELLO test-case Test
< Comparing differences for HELLO test-suite TargetApp
<   Comparing differences for HELLO test-case Test (on errors.hello,output.hello)
<   HELLO test-case Test had known bugs (bug 42 (BAD SCRIPT)) on <machine> : differences in output
< Results:
< 
< Tests that did not succeed:
<   HELLO test-case Test had known bugs (bug 42 (BAD SCRIPT)) on <machine> : differences in output
< 
< Tests Run: 1, Known Bugs: 1
< Creating batch report for application HELLO ...
< File written.
---------- Differences in pythonmocks ----------
2,8d1
< <-PYT:import urllib.request
< <-PYT:import urllib.error
< <-PYT:urllib.request.urlopen('http://trac.edgewall.org/demo-0.13/ticket/42?format=tab')
< ->RET:Instance('addinfourl(addbase)', 'addinfourl1')
< <-PYT:addinfourl1.readlines()
< ->RET:['''\xef\xbb\xbfid\tsummary\treporter\towner\tdescription\ttype\tstatus\tpriority\tmilestone\tcomponent\tversion\tresolution\tkeywords\tcc\tchangelog\tapichanges\r
< ''']
---------- New result in exitcode ----------
1
---------- Missing result in targetReport ----------
known bugs=1
<today's date>
HELLO : 1 tests : 1 known bugs

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests had known bugs : 
In HELLO test-suite TargetApp:
  - HELLO test-case Test : bug 42 (BAD SCRIPT)


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that had known bugs follows...
--------------------------------------------------------
TEST had known bugs (bug 42 (BAD SCRIPT)) on : HELLO test-case Test (under Test)
Could not parse reply from trac, maybe incompatible interface.
(This bug was triggered by text found in file 'output' matching 'after [0-9]* seconds')
---------- Differences in output ----------
1c1
< Hello world
---
> Hello after 5 seconds

TEST FAILED on ts-sim-scenario-ba : TestSelf KnownBugs TracPlugin ClosedBug ( Last six runs Sep2019 )

---------- Differences in errors ----------
0a1,45
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
> ModuleNotFoundError: No module named 'default'
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "<filtered>/testmodel.py", line <filtered>, in makeConfigObject
>     return plugins.importAndCall(moduleName, "getConfig", self.inputOptions)
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     import texttestlib.default.batch
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     from texttestlib.default.batch import testoverview
>   File "<filtered>/testoverview.py", line <filtered>, in <module>
>     from texttestlib.default.batch import HTMLgen, HTMLcolors, jenkinschanges
>   File "<filtered>/HTMLgen.py", line <filtered>, in <module>
>     from .imgsize import imgsize
>   File "<filtered>/imgsize.py", line <filtered>, in <module>
>     from PIL import Image
>   File "<filtered>/Image.py", line <filtered>, in <module>
>     from pathlib import Path
>   File "<filtered>/pathlib.py", line <filtered>, in <module>
>     from urllib.parse import quote_from_bytes as urlquote_from_bytes
<truncated after showing first 30 lines>
---------- Differences in catalogue ----------
1,11c1
< The following new files/directories were created:
< <Test Directory>
< ----texttesttmp
< --------local.<datetime>.<pid>
< ------------batchreport.hello
< ------------hello
< ----------------Test
< --------------------errors.hello
< --------------------output.hello
< --------------------framework_tmp
< ------------------------teststate
---
> No files or directories were created, edited or deleted.
---------- Differences in output ----------
1,14d0
< Using Application HELLO
< Running HELLO test-suite TargetApp
<   Running HELLO test-case Test
< Comparing differences for HELLO test-suite TargetApp
<   Comparing differences for HELLO test-case Test (on errors.hello,output.hello)
<   HELLO test-case Test had internal errors (bug 42 (closed)) on <machine> : differences in output
< Results:
< 
< Tests that did not succeed:
<   HELLO test-case Test had internal errors (bug 42 (closed)) on <machine> : differences in output
< 
< Tests Run: 1, Failures: 1
< Creating batch report for application HELLO ...
< File written.
---------- Differences in pythonmocks ----------
2,9d1
< <-PYT:import urllib.request
< <-PYT:import urllib.error
< <-PYT:urllib.request.urlopen('http://trac.edgewall.org/demo-0.13/ticket/42?format=tab')
< ->RET:Instance('addinfourl(addbase)', 'addinfourl1')
< <-PYT:addinfourl1.readlines()
< ->RET:['''\xef\xbb\xbfid\tsummary\treporter\towner\tdescription\ttype\tstatus\tpriority\tmilestone\tcomponent\tversion\tresolution\tkeywords\tcc\tchangelog\tapichanges\r
< ''', '''33\tbp test task\tanonymous\tsomebody\t\tdefect\tclosed\tmajor\tmilestone4\tcomponent1\t\tfixed\t\t\t\t\r
< ''']
---------- Missing result in targetReport ----------
internal errors=1
<today's date>
HELLO : 1 tests : 1 internal errors

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests had internal errors : 
In HELLO test-suite TargetApp:
  - HELLO test-case Test : bug 42 (closed)


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that had internal errors follows...
--------------------------------------------------------
TEST had internal errors (bug 42 (closed)) on : HELLO test-case Test (under Test)
******************************************************
Ticket #42 (closed defect: fixed)
bp test task
http://trac.edgewall.org/demo-0.13/ticket/42
Reported By: anonymous Owned by: somebody
Priority: major Milestone: milestone4
Component: component1 Version: 
Description:

******************************************************
(This bug was triggered by text found in file 'output' matching 'after [0-9]* seconds')
---------- Differences in output ----------
1c1
<truncated after showing first 30 lines>

TEST FAILED on ts-sim-scenario-ba : TestSelf KnownBugs TracPlugin OpenBug ( Last six runs Sep2019 )

---------- Differences in errors ----------
0a1,45
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
> ModuleNotFoundError: No module named 'default'
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "<filtered>/testmodel.py", line <filtered>, in makeConfigObject
>     return plugins.importAndCall(moduleName, "getConfig", self.inputOptions)
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     import texttestlib.default.batch
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     from texttestlib.default.batch import testoverview
>   File "<filtered>/testoverview.py", line <filtered>, in <module>
>     from texttestlib.default.batch import HTMLgen, HTMLcolors, jenkinschanges
>   File "<filtered>/HTMLgen.py", line <filtered>, in <module>
>     from .imgsize import imgsize
>   File "<filtered>/imgsize.py", line <filtered>, in <module>
>     from PIL import Image
>   File "<filtered>/Image.py", line <filtered>, in <module>
>     from pathlib import Path
>   File "<filtered>/pathlib.py", line <filtered>, in <module>
>     from urllib.parse import quote_from_bytes as urlquote_from_bytes
<truncated after showing first 30 lines>
---------- Differences in catalogue ----------
1,11c1
< The following new files/directories were created:
< <Test Directory>
< ----texttesttmp
< --------local.<datetime>.<pid>
< ------------batchreport.hello
< ------------hello
< ----------------Test
< --------------------errors.hello
< --------------------output.hello
< --------------------framework_tmp
< ------------------------teststate
---
> No files or directories were created, edited or deleted.
---------- Differences in output ----------
1,14d0
< Using Application HELLO
< Running HELLO test-suite TargetApp
<   Running HELLO test-case Test
< Comparing differences for HELLO test-suite TargetApp
<   Comparing differences for HELLO test-case Test (on errors.hello,output.hello)
<   HELLO test-case Test had known bugs (bug 42 (new)) on <machine> : differences in output
< Results:
< 
< Tests that did not succeed:
<   HELLO test-case Test had known bugs (bug 42 (new)) on <machine> : differences in output
< 
< Tests Run: 1, Known Bugs: 1
< Creating batch report for application HELLO ...
< File written.
---------- Differences in pythonmocks ----------
2,9d1
< <-PYT:import urllib.request
< <-PYT:import urllib.error
< <-PYT:urllib.request.urlopen('http://trac.edgewall.org/demo-0.13/ticket/42?format=tab')
< ->RET:Instance('addinfourl(addbase)', 'addinfourl1')
< <-PYT:addinfourl1.readlines()
< ->RET:['''\xef\xbb\xbfid\tsummary\treporter\towner\tdescription\ttype\tstatus\tpriority\tmilestone\tcomponent\tversion\tresolution\tkeywords\tcc\tchangelog\tapichanges\r
< ''', '''65\tTestticket\tanonymous\tjoe\tlorem ipsum asd ger  ita solum isse\tdefect\tnew\tcritical\tmilestone1\tcomponent2\t1.0\t\t\t\t\t\r
< ''']
---------- New result in exitcode ----------
1
---------- Missing result in targetReport ----------
known bugs=1
<today's date>
HELLO : 1 tests : 1 known bugs

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests had known bugs : 
In HELLO test-suite TargetApp:
  - HELLO test-case Test : bug 42 (new)


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that had known bugs follows...
--------------------------------------------------------
TEST had known bugs (bug 42 (new)) on : HELLO test-case Test (under Test)
******************************************************
Ticket #42 (new defect: )
Testticket
http://trac.edgewall.org/demo-0.13/ticket/42
Reported By: anonymous Owned by: joe
Priority: critical Milestone: milestone1
Component: component2 Version: 1.0
Description:
lorem ipsum asd ger  ita solum isse
******************************************************
(This bug was triggered by text found in file 'output' matching 'after [0-9]* seconds')
---------- Differences in output ----------
1c1
<truncated after showing first 30 lines>
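
The stacked tracebacks above are Python exception chaining: a bare `import default` fails with `ModuleNotFoundError`, and the fallback import under the `texttestlib` package then dies further down the `HTMLgen` → `PIL` → `pathlib` chain, producing the "During handling of the above exception, another exception occurred" join. A minimal sketch of that try/fallback pattern follows; the helper name and the `texttestlib.` prefix mirror the traceback but are illustrative assumptions, not TextTest's actual API.

```python
import importlib


def import_and_call(module_name, func_name, *args):
    """Hypothetical sketch of a plugin loader's try/fallback import.

    Any exception raised inside the fallback import is chained onto the
    original ModuleNotFoundError, which is why the report shows two
    stacked tracebacks."""
    try:
        module = importlib.import_module(module_name)
    except ModuleNotFoundError:
        # Retry under a package prefix; the exact fallback used by the
        # real loader is an assumption based on the traceback paths.
        module = importlib.import_module("texttestlib." + module_name)
    return getattr(module, func_name)(*args)
```

When the fallback import also fails, the second exception carries the first one in its `__context__`, reproducing the two-traceback layout seen in these diffs.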

TEST FAILED on ts-sim-scenario-ba : TestSelf KnownBugs TracPlugin UnknownBug ( Last six runs Sep2019 )

---------- Differences in errors ----------
0a1,45
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
> ModuleNotFoundError: No module named 'default'
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "<filtered>/testmodel.py", line <filtered>, in makeConfigObject
>     return plugins.importAndCall(moduleName, "getConfig", self.inputOptions)
>   File "<filtered>/plugins.py", line <filtered>, in importAndCall
>     exec(command, globals(), namespace)
>   File "<string>", line 1, in <module>
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     import texttestlib.default.batch
>   File "<filtered>/__init__.py", line <filtered>, in <module>
>     from texttestlib.default.batch import testoverview
>   File "<filtered>/testoverview.py", line <filtered>, in <module>
>     from texttestlib.default.batch import HTMLgen, HTMLcolors, jenkinschanges
>   File "<filtered>/HTMLgen.py", line <filtered>, in <module>
>     from .imgsize import imgsize
>   File "<filtered>/imgsize.py", line <filtered>, in <module>
>     from PIL import Image
>   File "<filtered>/Image.py", line <filtered>, in <module>
>     from pathlib import Path
>   File "<filtered>/pathlib.py", line <filtered>, in <module>
>     from urllib.parse import quote_from_bytes as urlquote_from_bytes
<truncated after showing first 30 lines>
---------- Differences in catalogue ----------
1,11c1
< The following new files/directories were created:
< <Test Directory>
< ----texttesttmp
< --------local.<datetime>.<pid>
< ------------batchreport.hello
< ------------hello
< ----------------Test
< --------------------errors.hello
< --------------------output.hello
< --------------------framework_tmp
< ------------------------teststate
---
> No files or directories were created, edited or deleted.
---------- Differences in output ----------
1,14d0
< Using Application HELLO
< Running HELLO test-suite TargetApp
<   Running HELLO test-case Test
< Comparing differences for HELLO test-suite TargetApp
<   Comparing differences for HELLO test-case Test (on errors.hello,output.hello)
<   HELLO test-case Test had known bugs (bug 42 (NONEXISTENT)) on <machine> : differences in output
< Results:
< 
< Tests that did not succeed:
<   HELLO test-case Test had known bugs (bug 42 (NONEXISTENT)) on <machine> : differences in output
< 
< Tests Run: 1, Known Bugs: 1
< Creating batch report for application HELLO ...
< File written.
---------- Differences in pythonmocks ----------
2,7d1
< <-PYT:import urllib.request
< <-PYT:import urllib.error
< <-PYT:urllib.request.urlopen('http://trac.edgewall.org/demo-0.13/ticket/42?format=tab')
< ->RET:raise Instance('HTTPError(URLError, IOError)', 'httperror1')
< <-PYT:httperror1.__str__()
< ->RET:'HTTP Error 404: Not Found'
---------- New result in exitcode ----------
1
---------- Missing result in targetReport ----------
known bugs=1
<today's date>
HELLO : 1 tests : 1 known bugs

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests had known bugs : 
In HELLO test-suite TargetApp:
  - HELLO test-case Test : bug 42 (NONEXISTENT)


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that had known bugs follows...
--------------------------------------------------------
TEST had known bugs (bug 42 (NONEXISTENT)) on : HELLO test-case Test (under Test)
Failed to open URL 'http://trac.edgewall.org/demo-0.13/ticket/42?format=tab': HTTP Error 404: Not Found.

Please make sure that bug 42 exists
and that the configuration entry 'bug_system_location' points to the correct trac instance.
The current value is 'http://trac.edgewall.org/demo-0.13/'.
(This bug was triggered by text found in file 'output' matching 'after [0-9]* seconds')
---------- Differences in output ----------
1c1
< Hello world
---
> Hello after 5 seconds
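
The trigger lines above ("This bug was triggered by text found in file 'output' matching 'after [0-9]* seconds'") describe a plain regular-expression search over the result file: the new output "Hello after 5 seconds" matches the configured pattern, so the difference is reported as bug 42 rather than a fresh failure. A minimal sketch of that reclassification, with a hypothetical helper name:

```python
import re


def find_triggered_bug(file_text, triggers):
    # "triggers" maps a bug id to its configured search expression; the
    # first pattern found anywhere in the file wins. This mirrors the
    # "(This bug was triggered by ...)" lines in the report, but the
    # helper itself is illustrative, not TextTest's API.
    for bug_id, pattern in triggers.items():
        if re.search(pattern, file_text):
            return bug_id
    return None
```

With the diff above, `find_triggered_bug("Hello after 5 seconds", {42: r"after [0-9]* seconds"})` returns 42, while the expected text "Hello world" matches nothing.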

Detailed information for the tests that were terminated before completion:

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling Console BadPerformanceMachine ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.badperfmachine.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,12d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP.badperfmachine -l hostname=no_such_machine|no_such_machine2 -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.badperfmachine -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: can't resolve hostname "no_such_machine|no_such_machine2".
< Exiting.
< ->EXC:1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic2-SLEEP.badperfmachine -l hostname=no_such_machine|no_such_machine2 -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.badperfmachine -l -tp Basic2 -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: can't resolve hostname "no_such_machine|no_such_machine2".
< Exiting.
< ->EXC:1
< -- Unordered text as found by filter 'Completed submission of all tests' --
< <-SRV:Completed submission of all tests
< 
---------- Differences in output ----------
3,11c3
< S: SLEEP.badperfmachine test-case Basic not compared:
< Failed to submit to SGE (Unable to run job: can't resolve hostname "no_such_machine|no_such_machine2".)
< Submission command was 'qsub -N Test-Basic-SLEEP.badperfmachine -l hostname=no_such_machine|no_such_machine2 -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
< Q: Submitting SLEEP.badperfmachine test-case Basic2 to default SGE queue
< S: SLEEP.badperfmachine test-case Basic2 not compared:
< Failed to submit to SGE (Unable to run job: can't resolve hostname "no_such_machine|no_such_machine2".)
< Submission command was 'qsub -N Test-Basic2-SLEEP.badperfmachine -l hostname=no_such_machine|no_such_machine2 -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
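
The TIMEOUT sections that follow all share one root cause, visible in the traceback: `os.getenv("USER")` returns `None` when the variable is unset (typical in daemonized or stripped environments), and concatenating `None` onto `"slave_start_errors."` raises the reported `TypeError: must be str, not NoneType`. A defensive sketch follows; the fallback chain and the `"unknown"` default are our assumptions, not TextTest's actual fix.

```python
import getpass
import os


def slave_start_error_file(core_file_location):
    # os.getenv("USER") is None when USER is unset, and
    # "slave_start_errors." + None raises the TypeError seen above.
    # getpass.getuser() consults LOGNAME/USER/LNAME/USERNAME and then
    # the password database before giving up.
    try:
        user = os.getenv("USER") or getpass.getuser()
    except Exception:
        user = "unknown"  # last-resort default; an assumption, not TextTest's behaviour
    return os.path.join(core_file_location, "slave_start_errors." + user)
```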

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling Console BadPerformanceModel ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.badperfarch.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,12d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP.badperfarch -l arch=no_such_arch -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.badperfarch -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: error: no suitable queues.
< Exiting.
< ->EXC:1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic2-SLEEP.badperfarch -l arch=no_such_arch -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.badperfarch -l -tp Basic2 -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: error: no suitable queues.
< Exiting.
< ->EXC:1
< -- Unordered text as found by filter 'Completed submission of all tests' --
< <-SRV:Completed submission of all tests
< 
---------- Differences in output ----------
3,11c3
< S: SLEEP.badperfarch test-case Basic not compared:
< Failed to submit to SGE (Unable to run job: error: no suitable queues.)
< Submission command was 'qsub -N Test-Basic-SLEEP.badperfarch -l arch=no_such_arch -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
< Q: Submitting SLEEP.badperfarch test-case Basic2 to default SGE queue
< S: SLEEP.badperfarch test-case Basic2 not compared:
< Failed to submit to SGE (Unable to run job: error: no suitable queues.)
< Submission command was 'qsub -N Test-Basic2-SLEEP.badperfarch -l arch=no_such_arch -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling Console BadQueue ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.badqueue.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,13d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP.badqueue -q non_existent -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.badqueue -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:
< Unable to run job: Job was rejected because job requests unknown queue "non_existent".
< Exiting.
< ->EXC:1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic2-SLEEP.badqueue -q non_existent -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.badqueue -l -tp Basic2 -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: Job was rejected because job requests unknown queue "non_existent".
< Exiting.
< ->EXC:1
< -- Unordered text as found by filter 'Completed submission of all tests' --
< <-SRV:Completed submission of all tests
< 
---------- Differences in output ----------
3,11c3
< S: SLEEP.badqueue test-case Basic not compared:
< Failed to submit to SGE (Unable to run job: Job was rejected because job requests unknown queue "non_existent".)
< Submission command was 'qsub -N Test-Basic-SLEEP.badqueue -q non_existent -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
< Q: Submitting SLEEP.badqueue test-case Basic2 to SGE queue non_existent
< S: SLEEP.badqueue test-case Basic2 not compared:
< Failed to submit to SGE (Unable to run job: Job was rejected because job requests unknown queue "non_existent".)
< Submission command was 'qsub -N Test-Basic2-SLEEP.badqueue -q non_existent -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling Console BadQueueCommandLine ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,12d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP -q non_existent -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: Job was rejected because job requests unknown queue "non_existent".
< Exiting.
< ->EXC:1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic2-SLEEP -q non_existent -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep -l -tp Basic2 -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: Job was rejected because job requests unknown queue "non_existent".
< Exiting.
< ->EXC:1
< -- Unordered text as found by filter 'Completed submission of all tests' --
< <-SRV:Completed submission of all tests
< 
---------- Differences in output ----------
3,11c3
< S: SLEEP test-case Basic not compared:
< Failed to submit to SGE (Unable to run job: Job was rejected because job requests unknown queue "non_existent".)
< Submission command was 'qsub -N Test-Basic-SLEEP -q non_existent -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
< Q: Submitting SLEEP test-case Basic2 to SGE queue non_existent
< S: SLEEP test-case Basic2 not compared:
< Failed to submit to SGE (Unable to run job: Job was rejected because job requests unknown queue "non_existent".)
< Submission command was 'qsub -N Test-Basic2-SLEEP -q non_existent -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling Console BadQueueKnownBug ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
6d5
< ------------batchreport.sleep.kb
8,14d6
< ------------sleep.kb
< ----------------Basic
< --------------------framework_tmp
< ------------------------teststate
< ----------------Basic2
< --------------------framework_tmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,12d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP.kb -q non_existent -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.kb -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b local""
< ->ERR:Unable to run job: Job was rejected because job requests unknown queue "non_existent".
< Exiting.
< ->EXC:1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic2-SLEEP.kb -q non_existent -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.kb -l -tp Basic2 -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b local""
< ->ERR:Unable to run job: Job was rejected because job requests unknown queue "non_existent".
< Exiting.
< ->EXC:1
< -- Unordered text as found by filter 'Completed submission of all tests' --
< <-SRV:Completed submission of all tests
< 
---------- Differences in output ----------
3,18c3
< S: SLEEP.kb test-case Basic not compared:
< Test was trying to use a non-existent queue
< (This bug was triggered by text found in the full difference report matching 'non_existent')
< Q: Submitting SLEEP.kb test-case Basic2 to SGE queue non_existent
< S: SLEEP.kb test-case Basic2 not compared:
< Test was trying to use a non-existent queue
< (This bug was triggered by text found in the full difference report matching 'non_existent')
< Results:
< 
< Tests that did not succeed:
<   SLEEP.kb test-case Basic not compared: Test was trying to use a non-existent queue
<   SLEEP.kb test-case Basic2 not compared: Test was trying to use a non-existent queue
< 
< Tests Run: 2, Failures: 2
< Creating batch report for application SLEEP ...
< File written.
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
---------- Missing result in targetReport ----------
internal errors=2
<today's date>
SLEEP kb : 2 tests : 2 internal errors

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests had internal errors : 
In SLEEP.kb test-suite TargetApp:
  - SLEEP.kb test-case Basic  : queue didn't exist
  - SLEEP.kb test-case Basic2 : queue didn't exist


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that had internal errors follows...
--------------------------------------------------------
2 TESTS had internal errors :
Test was trying to use a non-existent queue
(This bug was triggered by text found in the full difference report matching 'non_existent')

-- SLEEP.kb test-case Basic (under Basic)
-- SLEEP.kb test-case Basic2 (under Basic2)

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling Console BadResource ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.badresource.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,12d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP.badresource -l no_such_resource -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.badresource -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: unknown resource "no_such_resource".
< Exiting.
< ->EXC:1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic2-SLEEP.badresource -l no_such_resource -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.badresource -l -tp Basic2 -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: unknown resource "no_such_resource".
< Exiting.
< ->EXC:1
< -- Unordered text as found by filter 'Completed submission of all tests' --
< <-SRV:Completed submission of all tests
< 
---------- Differences in output ----------
3,11c3
< S: SLEEP.badresource test-case Basic not compared:
< Failed to submit to SGE (Unable to run job: unknown resource "no_such_resource".)
< Submission command was 'qsub -N Test-Basic-SLEEP.badresource -l no_such_resource -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
< Q: Submitting SLEEP.badresource test-case Basic2 to default SGE queue
< S: SLEEP.badresource test-case Basic2 not compared:
< Failed to submit to SGE (Unable to run job: unknown resource "no_such_resource".)
< Submission command was 'qsub -N Test-Basic2-SLEEP.badresource -l no_such_resource -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling Console BadResourceCommandLine ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
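The traceback above shows the root cause of every TIMEOUT in this section: `os.getenv("USER")` returns `None` when the `USER` environment variable is unset (as it can be under cron or grid daemons), so the string concatenation in `getSlaveStartErrorFile` raises `TypeError`. A minimal sketch of a defensive fallback, based only on the call shape visible in the trace (the `getpass.getuser()` default is an assumption, not TextTest's actual fix):

```python
import getpass
import os


def slave_start_error_file(core_file_location):
    # os.getenv("USER") is None when USER is unset, which breaks
    # "slave_start_errors." + None with exactly the TypeError above.
    user = os.getenv("USER") or getpass.getuser()  # fallback is an assumption
    return os.path.join(core_file_location, "slave_start_errors." + user)
```

With `USER` unset this still produces a usable path instead of crashing the master process.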
---------- Differences in catalogue ----------
3a4,5
> --------console.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,12d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP -l no_such_resource -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: unknown resource "no_such_resource".
< Exiting.
< ->EXC:1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic2-SLEEP -l no_such_resource -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep -l -tp Basic2 -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: unknown resource "no_such_resource".
< Exiting.
< ->EXC:1
< -- Unordered text as found by filter 'Completed submission of all tests' --
< <-SRV:Completed submission of all tests
< 
---------- Differences in output ----------
3,11c3
< S: SLEEP test-case Basic not compared:
< Failed to submit to SGE (Unable to run job: unknown resource "no_such_resource".)
< Submission command was 'qsub -N Test-Basic-SLEEP -l no_such_resource -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
< Q: Submitting SLEEP test-case Basic2 to default SGE queue
< S: SLEEP test-case Basic2 not compared:
< Failed to submit to SGE (Unable to run job: unknown resource "no_such_resource".)
< Submission command was 'qsub -N Test-Basic2-SLEEP -l no_such_resource -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling Console NotInstalled ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,4d1
< -- Unordered text as found by filter 'Completed submission of all tests' --
< <-SRV:Completed submission of all tests
< 
---------- Differences in output ----------
3,11c3
< S: SLEEP test-case Basic not compared:
< Failed to submit to SGE (local machine is not a submit host: running 'qsub' failed.)
< Submission command was 'qsub -N Test-Basic-SLEEP -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
< Q: Submitting SLEEP test-case Basic2 to default SGE queue
< S: SLEEP test-case Basic2 not compared:
< Failed to submit to SGE (local machine is not a submit host: running 'qsub' failed.)
< Submission command was 'qsub -N Test-Basic2-SLEEP -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
---------- Missing result in pythonmocks ----------
<-PYT:subprocess.Popen(['qsub', '-N', 'Test-Basic-SLEEP', '-w', 'e', '-notify', '-m', 'n', '-cwd', '-b', 'y', '-v', 'STANDARD', '-o', '/dev/null', '-e', '<test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user>', 'exec', ('$SHELL -c "exec <path_to_texttest> -d '
 '<test write dir>/TargetApp '
 '-a sleep -l -tp Basic -slave '
 '<test write dir>/texttesttmp/<target tmp dir> '
 '-servaddr <host:port>"')], cwd='<test write dir>/texttesttmp/grid_core_files', encoding=<something>, env=<something>,
<something>=None, stderr=-1, stdout=-1)
->RET:raise OSError('[Errno 2] No such file or directory')
<-PYT:subprocess.Popen(['qsub', '-N', 'Test-Basic2-SLEEP', '-w', 'e', '-notify', '-m', 'n', '-cwd', '-b', 'y', '-v', 'STANDARD', '-o', '/dev/null', '-e', '<test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user>', 'exec', ('$SHELL -c "exec <path_to_texttest> -d '
 '<test write dir>/TargetApp '
 '-a sleep -l -tp Basic2 -slave '
 '<test write dir>/texttesttmp/<target tmp dir> '
 '-servaddr <host:port>"')], cwd='<test write dir>/texttesttmp/grid_core_files', encoding=<something>, env=<something>,
<something>=None, stderr=-1, stdout=-1)
->RET:raise OSError('[Errno 2] No such file or directory')

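The NotInstalled scenario above expects `subprocess.Popen` to raise `OSError` (`[Errno 2] No such file or directory`) when `qsub` is absent, which the expected output reports as "local machine is not a submit host: running 'qsub' failed." A hedged sketch of that error-translation pattern (the function name and message wording here are illustrative, not TextTest's actual code):

```python
import subprocess


def try_submit(cmd_args, cwd="."):
    """Run a queue-submission command, translating a missing binary
    into a readable submission failure instead of an uncaught crash."""
    try:
        proc = subprocess.Popen(cmd_args, cwd=cwd,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        _out, err = proc.communicate()
        return proc.returncode == 0, err.decode(errors="replace")
    except OSError:
        # Binary not on PATH: mirror the log's "running 'qsub' failed" style
        return False, "local machine is not a submit host: running %r failed." % cmd_args[0]
```

Catching `OSError` at the submission boundary is what lets the master record the test as "not compared" rather than aborting the whole run.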
TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling Console OtherAppDefaultConfig LocalAndGrid ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
1a2,20
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,199d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO -w e -notify -m n -cwd -b y -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 15783 ("Test-Test-HELLO") has been submitted
< ->CLI:6351
< hello:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on watsonville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (watsonville)'
< p10
< p13
< asS'observers'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,4c3
< S: HELLO test-case Test FAILED on <machine> : differences in output
< View details(v), Approve(a) or continue(any other key)?
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling Console PollJobLost ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
6d5
< ------------batchreport.sleep.killing
8,11d6
< ------------sleep.killing
< ----------------Basic
< --------------------framework_tmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,45d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP.killing -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.killing -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b short_local""
< ->OUT:Your job 1 (stupid) has been submitted.
< ->CLI:6035
< sleep.killing:Basic
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on abbeville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (abbeville)'
< p10
< p13
< asS'observers'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,16c3
< S: SLEEP.killing test-case Basic not compared:
< no report, possibly killed with SIGKILL
< Job ID was 1
< ---------- Full accounting info from SGE ----------
< Some random accounting info
< 
< Results:
< 
< Tests that did not succeed:
<   SLEEP.killing test-case Basic not compared: no report, possibly killed with SIGKILL
< 
< Tests Run: 1, Incomplete: 1
< Creating batch report for application SLEEP ...
< File written.
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
---------- Missing result in pythonmocks ----------
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
---------- Missing result in targetReport ----------
killed=1
<today's date>
SLEEP killing : 1 tests : 1 killed

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests were terminated before completion : 
In SLEEP.killing test-suite TargetApp:
  - SLEEP.killing test-case Basic : no report, possibly killed with SIGKILL


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that were terminated before completion follows...
--------------------------------------------------------
TEST were terminated before completion : SLEEP.killing test-case Basic (under Basic)
no report, possibly killed with SIGKILL
Job ID was 1
---------- Full accounting info from SGE ----------
Some random accounting info

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling Console PollSgeCrash ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
1,3c1,19
< WARNING: error produced by slave job 'Test-Basic-SLEEP.killing'
< Some Beautiful Error Message...
< With Extra Nonsense
---
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
6d5
< ------------batchreport.sleep.killing
8,12d6
< ----------------Test-Basic-SLEEP.killing.errors
< ------------sleep.killing
< ----------------Basic
< --------------------framework_tmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,38d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP.killing -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.killing -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b short_local""
< ->FIL:<target tmp dir>
< ->OUT:Your job 1 (stupid) has been submitted.
< <-CMD:qstat
< <-CMD:qacct -j 1
< ->ERR:SGE has died and gone to heaven
< error: job id 1 not found
< <-CMD:qacct -j 1
< ->ERR:SGE has died and gone to heaven
< error: job id 1 not found
< <-CMD:qacct -j 1
< ->ERR:SGE has died and gone to heaven
< error: job id 1 not found
< <-CMD:qacct -j 1
< ->ERR:SGE has died and gone to heaven
< error: job id 1 not found
< <-CMD:qacct -j 1
< ->ERR:SGE has died and gone to heaven
< error: job id 1 not found
< <-CMD:qacct -j 1
< ->ERR:SGE has died and gone to heaven
< error: job id 1 not found
< <-CMD:qacct -j 1
< ->ERR:SGE has died and gone to heaven
< error: job id 1 not found
< <-CMD:qacct -j 1
< ->ERR:SGE has died and gone to heaven
< error: job id 1 not found
< <-CMD:qacct -j 1
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,31c3
< Waiting 1.0 seconds before retrying account info for job 1
< Waiting 2.0 seconds before retrying account info for job 1
< Waiting 4.0 seconds before retrying account info for job 1
< Waiting 8.0 seconds before retrying account info for job 1
< Waiting 8.0 seconds before retrying account info for job 1
< Waiting 8.0 seconds before retrying account info for job 1
< Waiting 8.0 seconds before retrying account info for job 1
< Waiting 8.0 seconds before retrying account info for job 1
< Waiting 8.0 seconds before retrying account info for job 1
< S: SLEEP.killing test-case Basic not compared:
< SGE job exited
< Job ID was 1
< ---------- Error messages written by SGE job ----------
< Some Beautiful Error Message...
< With Extra Nonsense
< ---------- Full accounting info from SGE ----------
< Could not find info about job: 1
< qacct error was as follows:
< SGE has died and gone to heaven
< error: job id 1 not found
< 
< Results:
< 
< Tests that did not succeed:
<   SLEEP.killing test-case Basic not compared: SGE job exited
< 
< Tests Run: 1, Incomplete: 1
< Creating batch report for application SLEEP ...
< File written.
<truncated after showing first 30 lines>
---------- Missing result in exitcode ----------
1
---------- Missing result in pythonmocks ----------
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(1.0)
<-PYT:time.sleep(2.0)
<-PYT:time.sleep(4.0)
<-PYT:time.sleep(8.0)
<-PYT:time.sleep(8.0)
<-PYT:time.sleep(8.0)
<-PYT:time.sleep(8.0)
<-PYT:time.sleep(8.0)
<-PYT:time.sleep(0.5)
---------- Missing result in targetReport ----------
unrunnable=1
<today's date>
SLEEP killing : 1 tests : 1 unrunnable

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests could not be run : 
In SLEEP.killing test-suite TargetApp:
  - SLEEP.killing test-case Basic : SGE job exited


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that could not be run follows...
--------------------------------------------------------
TEST could not be run : SLEEP.killing test-case Basic (under Basic)
SGE job exited
Job ID was 1
---------- Error messages written by SGE job ----------
Some Beautiful Error Message...
With Extra Nonsense
---------- Full accounting info from SGE ----------
Could not find info about job: 1
qacct error was as follows:
SGE has died and gone to heaven
error: job id 1 not found

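The "Waiting N seconds before retrying account info" lines and the matching `time.sleep` mock calls in the PollSgeCrash entry above trace a capped exponential backoff: delays double from 1.0 seconds up to a ceiling of 8.0 seconds between `qacct` retries. A small sketch of that schedule, assuming only the doubling and cap visible in the log:

```python
import itertools


def backoff_delays(initial=1.0, factor=2.0, cap=8.0):
    """Yield retry delays that double up to a cap, matching the
    1.0, 2.0, 4.0, 8.0, 8.0, ... pattern in the qacct retries above."""
    delay = initial
    while True:
        yield delay
        delay = min(delay * factor, cap)


# first six delays: [1.0, 2.0, 4.0, 8.0, 8.0, 8.0]
print(list(itertools.islice(backoff_delays(), 6)))
```

Capping the delay bounds the total wait per poll cycle while still easing load on a grid master that has stopped answering.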
TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling Console PollSgeErrorState ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
6d5
< ------------batchreport.sleep.killing
8,11d6
< ------------sleep.killing
< ----------------Basic
< --------------------framework_tmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,42d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP.killing -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.killing -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b short_local""
< ->OUT:Your job 1 (stupid) has been submitted.
< <-CMD:qstat
< ->OUT:job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID 
< -----------------------------------------------------------------------------------------------------------------
<  1 1.35849 Test-Basic geoff       Eqw     05/17/2010 11:05:45                                    1
< <-CMD:qstat -j 1
< ->OUT:==============================================================
< job_number:                 1
< exec_file:                  job_scripts/1
< submission_time:            Sun Nov 18 18:05:53 2012
< owner:                      nightjob
< uid:                        335
< group:                      build
< gid:                        201
< sge_o_home:                 /users/nightjob
< sge_o_log_name:             nightjob
< sge_o_path:                 /usr/local/share/java/apache-maven-3.0.3/bin:/usr/local/share/uge8.0.1/bin/lx-amd64:/usr/lib64/qt-3.3/bin:/carm/rational/releases/PurifyPlus.7.0.1.0-002/i386_linux2/bin:/usr/local/bin:/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/share/texttest/site/bin:/usr/local/share/java/apache-maven-2.2.1/bin
< sge_o_shell:                /bin/bash
< sge_o_workdir:              /nfs/vm/texttest-tmp/pws/<target tmp dir>/slavelogs
< sge_o_host:                 gotburh03p
< account:                    sge
< cwd:                        /nfs/vm/texttest-tmp/pws/<target tmp dir>/slavelogs
< stderr_path_list:           NONE:NONE:Test-UPS0204_2_TripOptionsBuild.Preferences.Supplementary.OpenPredefinedPlan-Gantt_Editor.master.x86_64_linux.errors
< hard resource_list:         carmarch=*x86_64_linux*,firebird_slot=TRUE,jenkins_tests=1,login=0,osversion=RHEL6*,short=TRUE
< mail_list:                  nightjob@gotburh03p.got.jeppesensystems.com
< notify:                     TRUE
< job_name:                   Test-UPS0204_2_TripOptionsBuild.Preferences.Supplementary.OpenPredefinedPlan-Gantt_Editor.master.x86_64_linux
< stdout_path_list:           NONE:NONE:Test-UPS0204_2_TripOptionsBuild.Preferences.Supplementary.OpenPredefinedPlan-Gantt_Editor.master.x86_64_linux.log
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,17c3
< S: SLEEP.killing test-case Basic not compared:
< SGE job exited
< Job ID was 1
< ---------- Full accounting info from SGE ----------
< SGE job entered error state: 1
< TextTest terminated this job as a result. SGE's error reason follows:
< error reason    1:          11/18/2012 18:12:49 [0:60266]: something evil happened on the execution machine
< Results:
< 
< Tests that did not succeed:
<   SLEEP.killing test-case Basic not compared: SGE job exited
< 
< Tests Run: 1, Incomplete: 1
< Creating batch report for application SLEEP ...
< File written.
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
---------- Missing result in pythonmocks ----------
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
<-PYT:time.sleep(0.5)
---------- Missing result in targetReport ----------
unrunnable=1
<today's date>
SLEEP killing : 1 tests : 1 unrunnable

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests could not be run : 
In SLEEP.killing test-suite TargetApp:
  - SLEEP.killing test-case Basic : SGE job exited


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that could not be run follows...
--------------------------------------------------------
TEST could not be run : SLEEP.killing test-case Basic (under Basic)
SGE job exited
Job ID was 1
---------- Full accounting info from SGE ----------
SGE job entered error state: 1
TextTest terminated this job as a result. SGE's error reason follows:
error reason    1:          11/18/2012 18:12:49 [0:60266]: something evil happened on the execution machine

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling Console PollSlaveCrash ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
1,2c1,19
< WARNING: error produced by slave job 'Test-Basic-SLEEP.killing'
< Some Beautiful Error Message...
---
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
6d5
< ------------batchreport.sleep.killing
8,12d6
< ----------------Test-Basic-SLEEP.killing.errors
< ------------sleep.killing
< ----------------Basic
< --------------------framework_tmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,11d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP.killing -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.killing -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b short_local""
< ->FIL:<target tmp dir>
< ->OUT:Your job 1 (stupid) has been submitted.
< ->FIL:<target tmp dir>
< <-CMD:qstat
< <-CMD:qacct -j 1
< ->OUT:Some random accounting info
< -- Unordered text as found by filter 'Completed submission of all tests' --
< <-SRV:Completed submission of all tests
< 
---------- Differences in output ----------
3,19c3
< S: SLEEP.killing test-case Basic not compared:
< SGE job exited
< Job ID was 1
< ---------- Error messages written by SGE job ----------
< Some Beautiful Error Message...
< With Extra Nonsense
< ---------- Full accounting info from SGE ----------
< Some random accounting info
< 
< Results:
< 
< Tests that did not succeed:
<   SLEEP.killing test-case Basic not compared: SGE job exited
< 
< Tests Run: 1, Incomplete: 1
< Creating batch report for application SLEEP ...
< File written.
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
---------- Missing result in pythonmocks ----------
<-PYT:time.sleep(0.5)
---------- Missing result in targetReport ----------
unrunnable=1
<today's date>
SLEEP killing : 1 tests : 1 unrunnable

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests could not be run : 
In SLEEP.killing test-suite TargetApp:
  - SLEEP.killing test-case Basic : SGE job exited


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that could not be run follows...
--------------------------------------------------------
TEST could not be run : SLEEP.killing test-case Basic (under Basic)
SGE job exited
Job ID was 1
---------- Error messages written by SGE job ----------
Some Beautiful Error Message...
With Extra Nonsense
---------- Full accounting info from SGE ----------
Some random accounting info

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling Console PreviousWriteDir ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
5d4
< ------------batchreport.sleep.startdigit
7,10d5
< ------------sleep.startdigit
< ----------------123Test
< --------------------framework_tmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,190d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-123Test-SLEEP.startdigit -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.startdigit -l -tp 123Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b console""
< ->OUT:Your job 39454 ("Test-123Test-SLEEP.startdigit") has been submitted
< ->CLI:25517
< sleep.startdigit:123Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on abbeville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (abbeville)'
< p10
< p13
< asS'observers'
< p14
<truncated after showing first 30 lines>
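[Editor's note] The `(dp2 ... S'category' ...` lines in the mock capture above are a Python 2 pickle (protocol 0) of the slave's test-state dictionary, sent back to the master over the CLI socket. A sketch of producing and reading such a stream under Python 3 (where `str` values use the `V` opcode rather than Python 2's `S`):

```python
import pickle

# Illustrative state only; keys mirror those visible in the capture.
state = {"category": "running", "briefText": "RUN (abbeville)"}
stream = pickle.dumps(state, protocol=0)
# Protocol 0 is an ASCII format: a dict opens with the '(dp' opcodes
# seen in the log, followed by key/value pushes and memo markers (p1...).
assert stream.startswith(b"(dp")
assert pickle.loads(stream) == state
```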
---------- Differences in output ----------
4,12c4
< S: SLEEP.startdigit test-case 123Test FAILED on <machine> : new results in errors,output
< Results:
< 
< Tests that did not succeed:
<   SLEEP.startdigit test-case 123Test FAILED on <machine> : new results in errors,output
< 
< Tests Run: 1, Failures: 1
< Creating batch report for application SLEEP ...
< File written.
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
---------- Missing result in targetReport ----------
FAILED=1
<today's date>
SLEEP startdigit : 1 tests : 1 FAILED

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests FAILED : 
In SLEEP.startdigit test-suite TargetApp:
  - SLEEP.startdigit test-case 123Test : errors new(+)


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that FAILED follows...
--------------------------------------------------------
TEST FAILED on : SLEEP.startdigit test-case 123Test (under 123Test)
---------- New result in errors ----------
---------- New result in output ----------
Sleeping for 1 seconds...
Done

2 TESTS were terminated before completion (TIMEOUT) on ts-sim-scenario-ba :

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,12d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP -l hostname=no_such_machine -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: can't resolve hostname "no_such_machine".
< Exiting.
< ->EXC:1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic2-SLEEP -l hostname=no_such_machine -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep -l -tp Basic2 -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: can't resolve hostname "no_such_machine".
< Exiting.
< ->EXC:1
< -- Unordered text as found by filter 'Completed submission of all tests' --
< <-SRV:Completed submission of all tests
< 
---------- Differences in output ----------
3,11c3
< S: SLEEP test-case Basic not compared:
< Failed to submit to SGE (Unable to run job: can't resolve hostname "no_such_machine".)
< Submission command was 'qsub -N Test-Basic-SLEEP -l hostname=no_such_machine -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
< Q: Submitting SLEEP test-case Basic2 to default SGE queue
< S: SLEEP test-case Basic2 not compared:
< Failed to submit to SGE (Unable to run job: can't resolve hostname "no_such_machine".)
< Submission command was 'qsub -N Test-Basic2-SLEEP -l hostname=no_such_machine -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
QueueSystems ErrorHandling SgeOnly BadRunMachine ( Last six runs Sep2019 )
QueueSystems ErrorHandling SgeOnly BadRunMachineLocal ( Last six runs Sep2019 )

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling SgeOnly DuplicateSlave ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
6d5
< ------------batchreport.sleep
8,14d6
< ------------sleep
< ----------------Basic
< --------------------framework_tmp
< ------------------------teststate
< ----------------Basic2
< --------------------framework_tmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,418d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b local""
< ->OUT:Your job 39337 ("Test-Basic-SLEEP") has been submitted
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic2-SLEEP -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep -l -tp Basic2 -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b local""
< ->OUT:Your job 39338 ("Test-Basic2-SLEEP") has been submitted
< ->CLI:22252
< sleep:Basic2
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on fake_machine1'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (fake_machine1)'
< p10
< p13
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,13c3
< Q: Submitting SLEEP test-case Basic2 to default SGE queue
< S: SLEEP test-case Basic2 FAILED on <machine> : new results in errors,output
< S: SLEEP test-case Basic succeeded on <machine>
< Results:
< 
< Tests that did not succeed:
<   SLEEP test-case Basic2 FAILED on <machine> : new results in errors,output
< 
< Tests Run: 2, Failures: 1
< Creating batch report for application SLEEP ...
< File written.
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
---------- Missing result in targetReport ----------
FAILED=1,succeeded=1
<today's date>
SLEEP : 2 tests : 1 FAILED

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests FAILED : 
In SLEEP test-suite TargetApp:
  - SLEEP test-case Basic2 : errors new(+)


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that FAILED follows...
--------------------------------------------------------
TEST FAILED on : SLEEP test-case Basic2 (under Basic2)
---------- New result in errors ----------
---------- New result in output ----------
Sleeping for 5 seconds...
Done

Summary of all Successful tests follows...
---------------------------------------------------------------------------------
The following tests succeeded : 
In SLEEP test-suite TargetApp:
  - SLEEP test-case Basic


TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling SgeOnly NonsenseFromSlave ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
1,3c1,19
< WARNING: Received request from hostname <machine> (process 15007)
< which could not be parsed:
< 'nonsense message'
---
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
6d5
< ------------batchreport.sleep
8,11d6
< ------------sleep
< ----------------Basic2
< --------------------framework_tmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,228d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic2-SLEEP -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep -l -tp Basic2 -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b local""
< ->OUT:Your job 39338 ("Test-Basic2-SLEEP") has been submitted
< ->CLI:22252
< sleep:Basic2
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on fake_machine1'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (fake_machine1)'
< p10
< p13
< asS'observers'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,11c3
< S: SLEEP test-case Basic2 FAILED on <machine> : new results in errors,output
< Results:
< 
< Tests that did not succeed:
<   SLEEP test-case Basic2 FAILED on <machine> : new results in errors,output
< 
< Tests Run: 1, Failures: 1
< Creating batch report for application SLEEP ...
< File written.
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
---------- Missing result in targetReport ----------
FAILED=1
<today's date>
SLEEP : 1 tests : 1 FAILED

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests FAILED : 
In SLEEP test-suite TargetApp:
  - SLEEP test-case Basic2 : errors new(+)


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that FAILED follows...
--------------------------------------------------------
TEST FAILED on : SLEEP test-case Basic2 (under Basic2)
---------- New result in errors ----------
---------- New result in output ----------
Sleeping for 5 seconds...
Done

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems ErrorHandling SgeOnly UnexpectedOutput ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.startdigit.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,202d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-123Test-SLEEP.startdigit -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.startdigit -l -tp 123Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Today SGE is feeling talkative
< Your job 28849 ("Test-123Test-SLEEP.startdigit") has been submitted
< ->CLI:31938
< sleep.startdigit:123Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on shelbyville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (shelbyville)'
< p10
< p13
< asS'observers'
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,4c3
< Unexpected output from qsub : Today is feeling talkative
< S: SLEEP.startdigit test-case 123Test succeeded on <machine>
---
> Terminating testing due to external interruption

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems Limits Reporting CpuLimit ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
6d5
< ------------batchreport.sleep.killing3.cpulimit
8,11d6
< ------------sleep.killing3.cpulimit
< ----------------Basic
< --------------------framework_tmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,258d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP.killing3.cpulimit -l s_cpu=10 -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.killing3.cpulimit -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b short_local""
< ->OUT:Your job 1634274 ("Test-Basic-SLEEP.killing3.cpulimit") has been submitted
< ->CLI:1579
< sleep.killing3.cpulimit:Basic
< (itexttestlib.default.runtest
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on kilimanjaro'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (kilimanjaro)'
< p10
< p13
< asS'observers'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,11c3
< S: SLEEP.killing3.cpulimit test-case Basic were terminated before completion (CPULIMIT) on <machine> : differences in errors,output
< Results:
< 
< Tests that did not succeed:
<   SLEEP.killing3.cpulimit test-case Basic were terminated before completion (CPULIMIT) on <machine> : differences in errors,output
< 
< Tests Run: 1, Incomplete: 1
< Creating batch report for application SLEEP ...
< File written.
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
---------- Missing result in targetReport ----------
killed=1
<today's date>
SLEEP killing3.cpulimit : 1 tests : 1 killed

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests were terminated before completion : 
In SLEEP.killing3.cpulimit test-suite TargetApp:
  - SLEEP.killing3.cpulimit test-case Basic : CPULIMIT


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that were terminated before completion follows...
--------------------------------------------------------
TEST were terminated before completion (CPULIMIT) on : SLEEP.killing3.cpulimit test-case Basic (under Basic)
Test exceeded maximum cpu time allowed
---------- Differences in errors ----------
0a1,4
> Traceback (most recent call last):
>     if time.clock() >= fullTime:
> KeyboardInterrupt
---------- Differences in output ----------
2d1
< Done.
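[Editor's note] The `KeyboardInterrupt` frame above shows the slave's busy-wait using `time.clock()`, which was deprecated in Python 3.3 and removed in 3.8, so that line would itself fail on a modern interpreter. A portable sketch of the equivalent loop — not the test program's actual code:

```python
import time

def busy_wait(seconds):
    # time.process_time() measures CPU time, matching the old Unix
    # semantics of time.clock(); use time.perf_counter() instead if
    # wallclock behaviour is wanted.
    full_time = time.process_time() + seconds
    while time.process_time() < full_time:
        pass
```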

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems Limits Reporting RunLimit ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
6d5
< ------------batchreport.sleep.killing3
8,11d6
< ------------sleep.killing3
< ----------------Basic
< --------------------framework_tmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,254d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP.killing3 -l s_rt=10 -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.killing3 -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b short_local""
< ->OUT:Your job 28851 ("Test-Basic-SLEEP.killing3") has been submitted
< ->CLI:30581
< sleep.killing3:Basic
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on dolgeville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (dolgeville)'
< p10
< p13
< asS'observers'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,11c3
< S: SLEEP.killing3 test-case Basic were terminated before completion (RUNLIMIT) on <machine> : differences in output
< Results:
< 
< Tests that did not succeed:
<   SLEEP.killing3 test-case Basic were terminated before completion (RUNLIMIT) on <machine> : differences in output
< 
< Tests Run: 1, Incomplete: 1
< Creating batch report for application SLEEP ...
< File written.
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
---------- Missing result in targetReport ----------
killed=1
<today's date>
SLEEP killing3 : 1 tests : 1 killed

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests were terminated before completion : 
In SLEEP.killing3 test-suite TargetApp:
  - SLEEP.killing3 test-case Basic : RUNLIMIT


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that were terminated before completion follows...
--------------------------------------------------------
TEST were terminated before completion (RUNLIMIT) on : SLEEP.killing3 test-case Basic (under Basic)
Test exceeded maximum wallclock time allowed by SGE (s_rt parameter)
---------- Differences in output ----------
1,2c1
< Sleeping for 5 seconds...
< Done
---
> Sleeping for 1000 seconds...

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems NormalOperation Console Batch2Tests ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
6d5
< ------------batchreport.sleep
8,25d6
< ----------------Test-Basic-SLEEP.errors
< ----------------Test-Basic-SLEEP.log
< ----------------Test-Basic2-SLEEP.errors
< ----------------Test-Basic2-SLEEP.log
< ------------sleep
< ----------------Basic
< --------------------errors.sleep
< --------------------output.sleep
< --------------------framework_tmp
< ------------------------errors.sleepcmp
< ------------------------errors.sleeporigcmp
< ------------------------teststate
< ----------------Basic2
< --------------------errors.sleep
< --------------------output.sleep
< --------------------framework_tmp
< ------------------------errors.sleepcmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,405d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic-SLEEP -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep -l -tp Basic -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b local""
< ->OUT:Your job 2685777 ("Test-Basic-SLEEP") has been submitted
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Basic2-SLEEP -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep -l -tp Basic2 -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b local""
< ->OUT:Your job 2685778 ("Test-Basic2-SLEEP") has been submitted
< ->FIL:<target tmp dir>
< ->CLI:11048
< sleep:Basic2
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on sunriver'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (sunriver)'
< p10
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,13c3
< Q: Submitting SLEEP test-case Basic2 to default SGE queue
< S: SLEEP test-case Basic2 FAILED on <machine> : new results in errors,output
< S: SLEEP test-case Basic succeeded on <machine>
< Results:
< 
< Tests that did not succeed:
<   SLEEP test-case Basic2 FAILED on <machine> : new results in errors,output
< 
< Tests Run: 2, Failures: 1
< Creating batch report for application SLEEP ...
< File written.
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
---------- Missing result in targetReport ----------
FAILED=1,succeeded=1
<today's date>
SLEEP : 2 tests : 1 FAILED

Summary of all Unsuccessful tests follows...
---------------------------------------------------------------------------------
The following tests FAILED : 
In SLEEP test-suite TargetApp:
  - SLEEP test-case Basic2 : errors new(+)


Details of all Unsuccessful tests follows...
---------------------------------------------------------------------------------

Detailed information for the tests that FAILED follows...
--------------------------------------------------------
TEST FAILED on : SLEEP test-case Basic2 (under Basic2)
---------- New result in errors ----------
---------- New result in output ----------
Sleeping for 5 seconds...
Done

Summary of all Successful tests follows...
---------------------------------------------------------------------------------
The following tests succeeded : 
In SLEEP test-suite TargetApp:
  - SLEEP test-case Basic


TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems NormalOperation Console InterleaveTests ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,20d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO -l no_such_resource -w e -notify -m n -cwd -b y -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: unknown resource "no_such_resource".
< Exiting.
< ->EXC:1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO2 -l no_such_resource -w e -notify -m n -cwd -b y -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello2 -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: unknown resource "no_such_resource".
< Exiting.
< ->EXC:1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test_copy-HELLO2 -l no_such_resource -w e -notify -m n -cwd -b y -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello2 -l -tp Test_copy -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: unknown resource "no_such_resource".
< Exiting.
< ->EXC:1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test_copy-HELLO -l no_such_resource -w e -notify -m n -cwd -b y -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello -l -tp Test_copy -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: unknown resource "no_such_resource".
< Exiting.
< ->EXC:1
< -- Unordered text as found by filter 'Completed submission of all tests' --
< <-SRV:Completed submission of all tests
< 
---------- Differences in output ----------
4,22c4
< S: HELLO test-case Test not compared:
< Failed to submit to SGE (Unable to run job: unknown resource "no_such_resource".)
< Submission command was 'qsub -N Test-Test-HELLO -l no_such_resource -w e -notify -m n -cwd -b y -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
< Q: Submitting HELLO2 test-case Test to default SGE queue
< S: HELLO2 test-case Test not compared:
< Failed to submit to SGE (Unable to run job: unknown resource "no_such_resource".)
< Submission command was 'qsub -N Test-Test-HELLO2 -l no_such_resource -w e -notify -m n -cwd -b y -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
< Q: Submitting HELLO2 test-case Test_copy to default SGE queue
< S: HELLO2 test-case Test_copy not compared:
< Failed to submit to SGE (Unable to run job: unknown resource "no_such_resource".)
< Submission command was 'qsub -N Test-Test_copy-HELLO2 -l no_such_resource -w e -notify -m n -cwd -b y -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
< Q: Submitting HELLO test-case Test_copy to default SGE queue
< S: HELLO test-case Test_copy not compared:
< Failed to submit to SGE (Unable to run job: unknown resource "no_such_resource".)
< Submission command was 'qsub -N Test-Test_copy-HELLO -l no_such_resource -w e -notify -m n -cwd -b y -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
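Every traceback in this report ends in the same failure: `os.getenv("USER")` returns `None` when the `USER` environment variable is unset (common for grid, cron, or daemon-spawned processes), and concatenating `None` onto `"slave_start_errors."` raises `TypeError: must be str, not NoneType`. The sketch below reproduces the pattern and shows one defensive variant; the function name `get_slave_start_error_file` is hypothetical and is not the actual fix applied to `sge.py`, just an illustration of the failure mode.

```python
import getpass
import os


def get_slave_start_error_file(core_file_location):
    # The failing expression in sge.py is equivalent to:
    #     "slave_start_errors." + os.getenv("USER")
    # os.getenv("USER") returns None when USER is unset, and
    # str + None raises the TypeError seen in the tracebacks above.
    # A defensive variant (illustrative, not the project's fix) falls
    # back to getpass.getuser(), which consults LOGNAME/USER/USERNAME
    # and finally the password database.
    user = os.getenv("USER") or getpass.getuser()
    return os.path.join(core_file_location, "slave_start_errors." + user)
```

Note that `getpass.getuser()` can itself fail if no account information is available at all, so a hard-coded final fallback (e.g. `"unknown"`) may still be needed in fully stripped-down environments.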

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems NormalOperation Console SelfDiagnostics ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
8a9,10
> --------console.startdigit.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,201d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-123Test-SLEEP.startdigit -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.startdigit -l -tp 123Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -x -xr <test_write_dir>/log/logging.debug -xw <test_write_dir>/log/Test-123Test-SLEEP.startdigit""

< ->OUT:Your job 28849 ("Test-123Test-SLEEP.startdigit") has been submitted
< ->CLI:31938
< sleep.startdigit:123Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on shelbyville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (shelbyville)'
< p10
< p13
< asS'observers'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
4c4
< standard log INFO - S: SLEEP.startdigit test-case 123Test succeeded on <machine>
---
> Terminating testing due to external interruption
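The `(dp2 / S'category' / p3 ...` lines in the externalmocks diff above appear to be a protocol-0 (ASCII, newline-delimited) Python pickle of the test-state dictionary that the slave sends back to the master; the `S'...'` string opcodes indicate a Python 2 pickler. The sketch below, an illustration rather than TextTest's actual serialization code, round-trips a comparable state dict in Python 3 (where plain strings pickle with different opcodes, so the byte stream is similar in shape but not identical).

```python
import pickle

# A state dict resembling the one visible in the mock log above.
state = {
    "category": "running",
    "freeText": "Running on shelbyville",
    "started": 1,
    "completed": 0,
    "briefText": "RUN (shelbyville)",
}

# Protocol 0 emits human-readable, newline-separated opcodes,
# which is why the raw payload shows up line-by-line in the log.
payload = pickle.dumps(state, protocol=0)
assert pickle.loads(payload) == state
```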

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems NormalOperation Console SelfDiagnosticsExplicit ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.startdigit.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,201d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-123Test-SLEEP.startdigit -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.startdigit -l -tp 123Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -x -xr <test_write_dir>/log/logging.mylog -xw <test_write_dir>/log/Test-123Test-SLEEP.startdigit""
< ->OUT:Your job 28849 ("Test-123Test-SLEEP.startdigit") has been submitted
< ->CLI:31938
< sleep.startdigit:123Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on shelbyville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (shelbyville)'
< p10
< p13
< asS'observers'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3c3
< S: SLEEP.startdigit test-case 123Test succeeded on <machine>
---
> Terminating testing due to external interruption

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems NormalOperation Console SlaveTransferFlags ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.startdigit.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,201d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-123Test-SLEEP.startdigit -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.startdigit -l -tp 123Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -ignorefilters -keepslave""
< ->OUT:Your job 28848 ("Test-123Test-SLEEP.startdigit") has been submitted
< ->CLI:14589
< sleep.startdigit:123Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on forestville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (forestville)'
< p10
< p13
< asS'observers'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3c3
< S: SLEEP.startdigit test-case 123Test succeeded on <machine>
---
> Terminating testing due to external interruption

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems NormalOperation Console TestNameWithDigit ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.startdigit.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,201d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-123Test-SLEEP.startdigit -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a sleep.startdigit -l -tp 123Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 28849 ("Test-123Test-SLEEP.startdigit") has been submitted
< ->CLI:31938
< sleep.startdigit:123Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on shelbyville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (shelbyville)'
< p10
< p13
< asS'observers'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3c3
< S: SLEEP.startdigit test-case 123Test succeeded on <machine>
---
> Terminating testing due to external interruption

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems Performance Miscellaneous AutoForcePerfResource ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,174d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-1Minute-PERF -w e -notify -m n -cwd -b y -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a perf -l -tp 1Minute -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 8582816 ("Test-1Minute-PERF") has been submitted
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-5Minutes-PERF -l performance_resource -w e -notify -m n -cwd -b y -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a perf -l -tp 5Minutes -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: unknown resource "performance_resource".
< Exiting.
< ->EXC:1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-NoLogFile-PERF -l performance_resource -w e -notify -m n -cwd -b y -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a perf -l -tp NoLogFile -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->ERR:Unable to run job: unknown resource "performance_resource".
< Exiting.
< ->EXC:1
< ->CLI:9141
< perf:1Minute
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on perf_machine1'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,14c3
< Q: Submitting PERF test-case 5Minutes to default SGE queue
< S: PERF test-case 5Minutes not compared:
< Failed to submit to SGE (Unable to run job: unknown resource "performance_resource".)
< Submission command was 'qsub -N Test-5Minutes-PERF -l performance_resource -w e -notify -m n -cwd -b y -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
< Q: Submitting PERF test-case NoLogFile to default SGE queue
< S: PERF test-case NoLogFile not compared:
< Failed to submit to SGE (Unable to run job: unknown resource "performance_resource".)
< Submission command was 'qsub -N Test-NoLogFile-PERF -l performance_resource -w e -notify -m n -cwd -b y -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
< S: PERF test-case 1Minute FAILED on <machine> : new results in logperf
< View details(v), Approve(a) or continue(any other key)?
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems Performance Miscellaneous ForcePerfResource ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.resource.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,166d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Failures-PERF.resource -l performance_resource -w e -notify -m n -cwd -b y -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a perf.resource -l -tp Failures -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 4867727 ("Test-Failures-PERF.resource") has been submitted
< ->CLI:9141
< perf.resource:Failures
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on perf_machine1'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (perf_machine1)'
< p10
< p13
< asS'lifecycleChange'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,4c3
< S: PERF.resource test-case Failures FAILED on <machine> : new results in logperf
< View details(v), Approve Version resource(1), Approve(a) or continue(any other key)?
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems Performance Miscellaneous SubmitParallel ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.resource.parallel.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,170d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Failures-PERF.resource.parallel -pe * 2 -w e -notify -m n -cwd -b y -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a perf.resource.parallel -l -tp Failures -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 4867448 ("Test-Failures-PERF.resource.parallel") has been submitted
< ->CLI:2521
< perf.resource.parallel:Failures
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on perf_machine1,perf_machine2'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (perf_machine1,perf_machine2)'
< p10
< p13
< aS'perf_machine2'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,10c3
< S: PERF.resource.parallel test-case Failures FAILED on <machine> perf_machine2 : new results in logperf
< View details(v), Approve Version resource.parallel(1), Approve Version resource(2), Approve Version parallel(3), Approve(a) or continue(any other key)?
< ------------------ New result in logperf --------------------
< Total Logperf  :      5.0 seconds on
< Also on perf_machine1 : geoff's job 'Test-Failures-PERF.resource.parallel'
< Also on perf_machine1 : altenst's job 'Test-439crew.demo.38hr.cg.xprs.25KNR_perm3.perm_jk_spp-CS.master.i386_linux.perm'
< Also on perf_machine2 : altenst's job 'Test-99crew.3weeks.vert.apc.global.cg_perm0.perm_ib_crm-CS.master.i386_linux.perm'
< Approve Version resource.parallel(1), Approve Version resource(2), Approve Version parallel(3), Approve(a) or continue(any other key)?
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master AllowRacing BasicReuse ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,392d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 25617 ("Test-Test-HELLO") has been submitted
< ->CLI:10538
< hello:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on tiptonville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (tiptonville)'
< p10
< p13
< asS'observers'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,6c3
< Waiting Q thread for 5 seconds
< S: HELLO test-case Test FAILED on <machine> : differences in output
< View details(v), Approve(a) or continue(any other key)?
< S: HELLO test-case Test succeeded on <machine>
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master AllowRacing RespectParallel ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.parallel.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,397d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO.parallel -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello.parallel -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 32549 ("Test-Test-HELLO.parallel") has been submitted
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test.Suite-HELLO.parallel -pe * 2 -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello.parallel -l -tp Suite/Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 32550 ("Test-Test.Suite-HELLO.parallel") has been submitted
< ->CLI:25381
< hello.parallel:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on titusville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (titusville)'
< p10
< p13
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,7c3
< Waiting Q thread for 5 seconds
< Q: Submitting HELLO.parallel test-case Test to default SGE queue
< S: HELLO.parallel test-case Test FAILED on <machine> : differences in output
< View details(v), Approve Version parallel(1), Approve(a) or continue(any other key)?
< S: HELLO.parallel test-case Test succeeded on <machine> schefferville
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST were terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master AllowRacing RespectResources ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.resource.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,203d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO.resource -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello.resource -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 8067 ("Test-Test-HELLO.resource") has been submitted
< ->CLI:27622
< hello.resource:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on schellville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (schellville)'
< p10
< p13
< asS'observers'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,10c3
< Waiting Q thread for 5 seconds
< S: HELLO.resource test-case Test FAILED on <machine> : differences in output
< View details(v), Approve Version resource(1), Approve(a) or continue(any other key)?
< Q: Submitting HELLO.resource test-case Test to default SGE queue
< S: HELLO.resource test-case Test not compared:
< Failed to submit to SGE (Unable to run job: unknown resource "no_such_resource".)
< Submission command was 'qsub -N Test-Test.Suite-HELLO.resource -l no_such_resource -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master ForceReuse Console CheckoutChange ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.++altcheckout.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,402d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test.Suite-HELLO -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello -l -tp Suite/Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 28875 ("Test-Test.Suite-HELLO") has been submitted
< <-SRV:Completed submission of tests up to capacity
< ->CLI:11698
< hello:Suite/Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on centreville.carmen.se'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (centreville.carmen.se)'
< p10
< p13
< asS'observers'
<truncated after showing first 30 lines>
---------- Differences in output ----------
4,5c4
< S: HELLO test-case Test succeeded on <machine>
< S: HELLO.altcheckout test-case Test succeeded on <machine>
---
> Terminating testing due to external interruption

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master ForceReuse Console ClearEnvironment ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.changeenv.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,403d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO.changeenv -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello.changeenv -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 28864 ("Test-Test-HELLO.changeenv") has been submitted
< <-SRV:Completed submission of tests up to capacity
< ->CLI:11596
< hello.changeenv:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on centreville.carmen.se'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (centreville.carmen.se)'
< p10
< p13
< asS'observers'
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,5c3
< S: HELLO.changeenv test-case Test FAILED on <machine> : differences in output
< View details(v), Approve Version changeenv(1), Approve(a) or continue(any other key)?
< S: HELLO.changeenv test-case Test succeeded on <machine>
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master ForceReuse Console NoReuseWithProxy ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,209d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; env 'TEXTTEST_SUBMIT_COMMAND_ARGS=['qsub', '-sync', 'y', '-V', '-N', 'Test-Test-HELLO', '-w', 'e', '-notify', '-m', 'n', '-cwd', '-b', 'y', '-v', 'STANDARD,TEXTTEST_SUBMIT_COMMAND_ARGS', '-o', '/dev/null', '-e', '<test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> 'exec', '$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>"']' qsub -N Test-Test-HELLO -w e -notify -m n -cwd -b y -v STANDARD,TEXTTEST_SUBMIT_COMMAND_ARGS -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <test_write_dir>/TargetApp/hello.py""
< ->OUT:Your job 28855 ("Test-Test-HELLO") has been submitted
< <-SRV:Completed submission of tests up to capacity
< ->CLI:11408
< hello:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on centreville.carmen.se'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (centreville.carmen.se)'
< p10
< p13
< asS'observers'
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,9c3
< S: HELLO test-case Test FAILED on <machine> : differences in output
< View details(v), Approve(a) or continue(any other key)?
< Q: Submitting HELLO test-case Test to default SGE queue
< S: HELLO test-case Test not compared:
< Failed to submit to SGE (Unable to run job: unknown resource "no_such_resource".)
< Submission command was 'qsub -N Test-Test.Suite-HELLO -w e -notify -m n -cwd -b y -v STANDARD,TEXTTEST_SUBMIT_COMMAND_ARGS -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master ForceReuse Console ReconnectIgnoreResources ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.resource.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,234d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO.resource -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello.resource -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -reconnect -reconnfull grid""
< ->OUT:Your job 2269606 ("Test-Test-HELLO.resource") has been submitted
< <-SRV:Completed submission of tests up to capacity
< ->FIL:<target tmp dir>
< ->CLI:13734
< hello.resource:Test
< (itexttestlib.default.rundependent
< Filtering
< p1
< (dp2
< S'category'
< p3
< S'initial_filter'
< p4
< sS'freeText'
< p5
< S'Filtering stored result files on kisaumi'
< p6
< sS'started'
< p7
< I0
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S''
< p12
< asS'observers'
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,4c3
< S: HELLO.resource test-case Test FAILED on <machine> : missing results for errors,output
< View details(v), Approve Version resource(1), Approve(a) or continue(any other key)?
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master ForceReuse Console RerunTest ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,602d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 28856 ("Test-Test-HELLO") has been submitted
< <-SRV:Completed submission of tests up to capacity
< ->CLI:11505
< hello:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on centreville.carmen.se'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (centreville.carmen.se)'
< p10
< p13
< asS'observers'
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,5c3
< S: HELLO test-case Test succeeded on <machine>
< S: HELLO test-case Test FAILED on <machine> : differences in output
< View details(v), Approve(a) or continue(any other key)?
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master ForceReuse Console ResourceOrderChanges ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.multiresource.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,403d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO.multiresource -l resource2,resource1 -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello.multiresource -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 28856 ("Test-Test-HELLO") has been submitted
< <-SRV:Completed submission of tests up to capacity
< ->CLI:11505
< hello.multiresource:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on centreville.carmen.se'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (centreville.carmen.se)'
< p10
< p13
< asS'observers'
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,5c3
< S: HELLO.multiresource test-case Test FAILED on <machine> : differences in output
< View details(v), Approve Version multiresource(1), Approve(a) or continue(any other key)?
< S: HELLO.multiresource test-case Test succeeded on <machine>
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master ForceReuse Console ResourcesOverReuse ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.resource.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,209d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO.resource -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello.resource -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 28855 ("Test-Test-HELLO.resource") has been submitted
< <-SRV:Completed submission of tests up to capacity
< ->CLI:11408
< hello.resource:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on centreville.carmen.se'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (centreville.carmen.se)'
< p10
< p13
< asS'observers'
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,9c3
< S: HELLO.resource test-case Test FAILED on <machine> : differences in output
< View details(v), Approve Version resource(1), Approve(a) or continue(any other key)?
< Q: Submitting HELLO.resource test-case Test to default SGE queue
< S: HELLO.resource test-case Test not compared:
< Failed to submit to SGE (Unable to run job: unknown resource "no_such_resource".)
< Submission command was 'qsub -N Test-Test.Suite-HELLO.resource -l no_such_resource -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test write dir>/texttesttmp/grid_core_files/slave_start_errors.<user> ... '
< 
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master ForceReuse Console RespectRerunLimit ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
6,13d5
< ------------hello
< ----------------Suite
< --------------------Test
< ------------------------framework_tmp
< ----------------------------teststate
< ----------------Test
< --------------------framework_tmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,602d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -b self_test""
< ->OUT:Your job 28856 ("Test-Test-HELLO") has been submitted
< <-SRV:Completed submission of tests up to capacity
< ->CLI:11505
< hello:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on centreville.carmen.se'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (centreville.carmen.se)'
< p10
< p13
< asS'observers'
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,12c3
< S: HELLO test-case Test succeeded on <machine>
< S: HELLO test-case Test FAILED on <machine> : differences in output
< Results:
< 
< Tests that did not succeed:
<   HELLO test-case Test FAILED on <machine> : differences in output
< 
< Tests Run: 2, Failures: 1
< Creating batch report for application HELLO ...
< done.
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1
---------- Missing result in pythonmocks ----------
<-PYT:import smtplib
<-PYT:smtplib.SMTP()
->RET:Instance('SMTP', 'smtp1')
<-PYT:smtp1.connect('localhost')
->RET:(220, 'crownpoint.got.jeppesensystems.com ESMTP Sendmail 8.14.4/8.14.4; Tue, 16 Dec 2014 10:42:05 +0100')
<-PYT:smtp1.sendmail('<username>@localhost', ['tom', 'dick', 'harry'], ('From: <username>@localhost\n'
 'To: tom,dick,harry\n'
 'Subject: <today's date> HELLO : 2 tests : 1 FAILED\n'
 '\n'
 'Summary of all Unsuccessful tests follows...\n'
 '---------------------------------------------------------------------------------\n'
 'The following tests FAILED : \n'
 'In HELLO test-suite TargetApp:\n'
 '  - HELLO test-case Test  : output different\n'
 '\n'
 '\n'
 'Details of all Unsuccessful tests follows...\n'
 '---------------------------------------------------------------------------------\n'
 '\n'
 'Detailed information for the tests that FAILED follows...\n'
 '--------------------------------------------------------\n'
 'TEST FAILED on : HELLO test-case Test (under Test)\n'
 '---------- Differences in output ----------\n'
 '1c1\n'
 '< Hello\n'
 '---\n'
 '> Hello World!\n'
 '\n'
 'Summary of all Successful tests follows...\n'
 '---------------------------------------------------------------------------------\n'
<truncated after showing first 30 lines>

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master ForceReuse Console ResubmitTest ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,604d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 28856 ("Test-Test-HELLO") has been submitted
< <-SRV:Completed submission of tests up to capacity
< ->CLI:11505
< hello:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on centreville.carmen.se'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (centreville.carmen.se)'
< p10
< p13
< asS'observers'
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,6c3
< S: HELLO test-case Test succeeded on <machine>
< Q: Submitting HELLO test-case Test to default SGE queue
< S: HELLO test-case Test FAILED on <machine> : differences in output
< View details(v), Approve(a) or continue(any other key)?
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master ForceReuse Console ReuseOnlyBeyondCapacity ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,403d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 28856 ("Test-Test-HELLO") has been submitted
< <-SRV:Completed submission of tests up to capacity
< ->CLI:11505
< hello:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on centreville.carmen.se'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (centreville.carmen.se)'
< p10
< p13
< asS'observers'
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,5c3
< S: HELLO test-case Test FAILED on <machine> : differences in output
< View details(v), Approve(a) or continue(any other key)?
< S: HELLO test-case Test succeeded on <machine>
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master ForceReuse Console SlowReadBeyondCapacity ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,405d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 28856 ("Test-Test-HELLO") has been submitted
< <-SRV:Completed submission of tests up to capacity
< ->CLI:11505
< hello:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on centreville.carmen.se'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (centreville.carmen.se)'
< p10
< p13
< asS'observers'
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,6c3
< S: HELLO test-case Test FAILED on <machine> : differences in output
< View details(v), Approve(a) or continue(any other key)?
< Q: Submitting HELLO test-case Test to default SGE queue
< S: HELLO test-case Test succeeded on <machine>
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : QueueSystems SlaveReuse Master ForceReuse Console VirtualDisplayArgs ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------console.++virtdisp.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,553d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-HELLO -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a hello -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 247198 ("Test-Test-HELLO") has been submitted
< <-SRV:Completed submission of tests up to capacity
< ->FIL:<target tmp dir>
< ->CLI:27383
< hello:Test
< (itexttestlib.default.rundependent
< Filtering
< p1
< (dp2
< S'category'
< p3
< S'initial_filter'
< p4
< sS'freeText'
< p5
< S'Filtering stored result files on kilimanjaro'
< p6
< sS'started'
< p7
< I0
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S''
< p12
< asS'observers'
<truncated after showing first 30 lines>
---------- Differences in output ----------
4,8c4
< S: HELLO test-case Test FAILED on <machine> : differences in output
< View details(v), Approve(a) or continue(any other key)?
< Q: Submitting HELLO.virtdisp test-case Test to default SGE queue
< S: HELLO.virtdisp test-case Test FAILED on <machine> : differences in output
< View details(v), Approve Version virtdisp(1), Approve(a) or continue(any other key)?
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : TestSelf EnvironmentFile Normal Console RemoveSuiteVars ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
3a4,5
> --------env.sge.<datetime>.<pid>
> ------------slavelogs
---------- Differences in externalmocks ----------
2,272d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test1.Suite1-Env_Variables.sge -w e -notify -m n -cwd -b y -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> -v STANDARD exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a env.sge -l -tp Suite1/Test1 -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 4916976 ("Test-Test1.Suite1-Env_Variables.lsf") has been submitted
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test2-Env_Variables.sge -w e -notify -m n -cwd -b y -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> -v STANDARD exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a env.sge -l -tp Test2 -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port>""
< ->OUT:Your job 4916977 ("Test-Test2-Env_Variables.lsf") has been submitted
< ->CLI:27047
< env.sge:Test2
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on hendersonville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (hendersonville)'
< p10
< p13
<truncated after showing first 30 lines>
---------- Differences in output ----------
3,7c3
< Q: Submitting Env Variables.sge test-case Test2 to default SGE queue
< S: Env Variables.sge test-case Test2 FAILED on <machine> : missing results for errors
< View details(v), Approve Version sge(1), Approve(a) or continue(any other key)?
< S: Env Variables.sge test-case Test1 FAILED on <machine> : missing results for errors
< View details(v), Approve Version sge(1), Approve(a) or continue(any other key)?
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : TestSelf TestData LinkedData PartialCopyIgnoreCatalogues ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
6,9d5
< ------------tdat.queuesystem
< ----------------Test1
< --------------------framework_tmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,162d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test1-TDAT.queuesystem -w e -notify -m n -cwd -b y -v STANDARD -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a tdat.queuesystem -l -tp Test1 -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -ignorecat -keeptmp""
< ->OUT:Your job 4916975 ("Test-Test1-TDAT.queuesystem") has been submitted
< ->CLI:23643
< tdat.queuesystem:Test1
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on statesville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (statesville)'
< p10
< p13
< asS'lifecycleChange'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3c3
< S: TDAT.queuesystem test-case Test1 succeeded on <machine>
---
> Terminating testing due to external interruption

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : TestSelf UITesting VirtualDisplay DisplayOnLocalHost ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
0a1,19
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
5,8d4
< ------------disp.sge
< ----------------Test
< --------------------framework_tmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,194d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-DISP.sge -w e -notify -m n -cwd -b y -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a disp.sge -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -keeptmp""
< ->OUT:Your job 12316 ("Test-Test-DISP.sge") has been submitted
< ->CLI:3979
< disp.sge:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on pennsville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (pennsville)'
< p10
< p13
< asS'observers'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3c3
< S: DISP.sge test-case Test succeeded on <machine>
---
> Terminating testing due to external interruption

TEST was terminated before completion (TIMEOUT) on ts-sim-scenario-ba : TestSelf UITesting VirtualDisplay OtherAppDefaultConfig ( Last six runs Sep2019 )

Test exceeded wallclock time limit of 300.0 seconds
---------- Differences in errors ----------
1a2,20
> Description of exception thrown :
> Traceback (most recent call last):
>   File "<filtered>/masterprocess.py", line <filtered>, in run
>     self.runAllTests()
>   File "<filtered>/actionrunner.py", line <filtered>, in runAllTests
>     self.runQueue(self.getTestForRun, self.runTest, "running")
>   File "<filtered>/actionrunner.py", line <filtered>, in runQueue
>     runMethod(test)
>   File "<filtered>/masterprocess.py", line <filtered>, in runTest
>     if not self.submitJob(test, submissionRules, commandArgs, slaveEnv):
>   File "<filtered>/masterprocess.py", line <filtered>, in submitJob
>     cmdArgs = self.getSubmitCmdArgs(test, submissionRules, commandArgs, slaveEnv)
>   File "<filtered>/masterprocess.py", line <filtered>, in getSubmitCmdArgs
>     return queueSystem.getSubmitCmdArgs(*args)
>   File "<filtered>/sge.py", line <filtered>, in getSubmitCmdArgs
>     qsubArgs += ["-o", os.devnull, "-e", self.getSlaveStartErrorFile()]
>   File "<filtered>/sge.py", line <filtered>, in getSlaveStartErrorFile
>     return os.path.join(self.coreFileLocation, "slave_start_errors." + os.getenv("USER"))
> TypeError: must be str, not NoneType
---------- Differences in catalogue ----------
5,8d4
< ------------disp.sge
< ----------------Test
< --------------------framework_tmp
< ------------------------teststate
---------- Differences in externalmocks ----------
2,194d1
< <-CMD:cd <test_write_dir>/texttesttmp/grid_core_files; qsub -N Test-Test-DISP.sge -w e -notify -m n -cwd -b y -o /dev/null -e <test_write_dir>/texttesttmp/grid_core_files/slave_start_errors.<user> exec "$SHELL -c "exec <path_to_texttest> -d <test_write_dir>/TargetApp -a disp.sge -l -tp Test -slave <test_write_dir>/texttesttmp/<target tmp dir> -servaddr <host:port> -keeptmp""
< ->OUT:Your job 12316 ("Test-Test-DISP.sge") has been submitted
< ->CLI:3979
< disp.sge:Test
< (itexttestlib.default
< Running
< p1
< (dp2
< S'category'
< p3
< S'running'
< p4
< sS'freeText'
< p5
< S'Running on pennsville'
< p6
< sS'started'
< p7
< I1
< sS'completed'
< p8
< I0
< sS'briefText'
< p9
< S'RUN (pennsville)'
< p10
< p13
< asS'observers'
< p14
<truncated after showing first 30 lines>
---------- Differences in output ----------
3c3
< S: DISP.sge test-case Test succeeded on <machine>
---
> Terminating testing due to external interruption
---------- Missing result in exitcode ----------
1